Compare commits


32 Commits

Author SHA1 Message Date
95fbc62486 chore: release (#87)
Signed-off-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-04-23 09:41:56 +00:00
284ed8f17b feat: implement guest exec (#107) 2024-04-22 20:13:43 +00:00
82576df7b7 feat: implement kernel / initrd oci image support (#103)
* feat: implement kernel / initrd oci image support

* fix: implement image urls more faithfully
2024-04-22 19:48:45 +00:00
1b90eedbcd build(deps): bump thiserror from 1.0.58 to 1.0.59 (#104)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.58 to 1.0.59.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.58...1.0.59)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-22 05:36:20 +00:00
aa941c6e87 build(deps): bump redb from 2.0.0 to 2.1.0 (#106)
Bumps [redb](https://github.com/cberner/redb) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/cberner/redb/releases)
- [Changelog](https://github.com/cberner/redb/blob/master/CHANGELOG.md)
- [Commits](https://github.com/cberner/redb/compare/v2.0.0...v2.1.0)

---
updated-dependencies:
- dependency-name: redb
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-22 05:36:13 +00:00
d0bf3c4c77 build(deps): bump reqwest from 0.12.3 to 0.12.4 (#105)
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.12.3 to 0.12.4.
- [Release notes](https://github.com/seanmonstar/reqwest/releases)
- [Changelog](https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md)
- [Commits](https://github.com/seanmonstar/reqwest/compare/v0.12.3...v0.12.4)

---
updated-dependencies:
- dependency-name: reqwest
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-22 05:36:06 +00:00
38e892e249 feat: idm v2 (#102)
* feat: rebuild idm to separate transport from content

* feat: fast guest lookup table and host identification
2024-04-22 04:00:32 +00:00
1a90372037 build(deps): bump sysinfo from 0.30.10 to 0.30.11 (#99)
Bumps [sysinfo](https://github.com/GuillaumeGomez/sysinfo) from 0.30.10 to 0.30.11.
- [Changelog](https://github.com/GuillaumeGomez/sysinfo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/GuillaumeGomez/sysinfo/compare/v0.30.10...v0.30.11)

---
updated-dependencies:
- dependency-name: sysinfo
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-19 20:35:57 +00:00
4754cdd128 build(deps): bump rustls from 0.22.3 to 0.22.4 (#101)
Bumps [rustls](https://github.com/rustls/rustls) from 0.22.3 to 0.22.4.
- [Release notes](https://github.com/rustls/rustls/releases)
- [Changelog](https://github.com/rustls/rustls/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rustls/rustls/compare/v/0.22.3...v/0.22.4)

---
updated-dependencies:
- dependency-name: rustls
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-19 20:35:44 +00:00
f843abcabf docs(dev): update order of setup instructions (#98)
This change corrects the setup step order so that the krata daemon is started
before the krata networking service.

Co-authored-by: Khionu Sybiern <khionu@edera.dev>
2024-04-18 18:00:48 +00:00
e8d89d4d5b build(deps): bump serde from 1.0.197 to 1.0.198 (#97)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.197 to 1.0.198.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.197...v1.0.198)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-17 05:17:00 +00:00
4e9738b959 fix: oci cache store should fallback to copy when rename won't work (#96) 2024-04-16 17:05:24 +00:00
8135307283 feat: oci concurrency improvements (#95)
* feat: implement improved and detailed oci progress indication

* feat: implement on-disk indexes of images

* oci: utilize rw-lock for increased cache performance
2024-04-16 16:29:54 +00:00
e450ebd2a2 feat: oci tar format, bit-perfect disk storage for config and manifest, concurrent image pulls (#88)
* oci: retain bit-perfect copies of manifest and config on disk

* feat: oci tar format support

* feat: concurrent image pulls
2024-04-16 08:53:44 +00:00
79f7742caa build(deps): bump serde_json from 1.0.115 to 1.0.116 (#90)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.115 to 1.0.116.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.115...v1.0.116)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-16 07:17:11 +00:00
c3c18271b4 build(deps): bump ratatui from 0.26.1 to 0.26.2 (#89)
Bumps [ratatui](https://github.com/ratatui-org/ratatui) from 0.26.1 to 0.26.2.
- [Release notes](https://github.com/ratatui-org/ratatui/releases)
- [Changelog](https://github.com/ratatui-org/ratatui/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ratatui-org/ratatui/compare/v0.26.1...v0.26.2)

---
updated-dependencies:
- dependency-name: ratatui
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-16 07:16:53 +00:00
218f848170 chore: release (#41)
Signed-off-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-04-15 19:15:00 +00:00
9d8c516a29 build(deps): bump sysinfo from 0.30.9 to 0.30.10 (#86)
Bumps [sysinfo](https://github.com/GuillaumeGomez/sysinfo) from 0.30.9 to 0.30.10.
- [Changelog](https://github.com/GuillaumeGomez/sysinfo/blob/v0.30.10/CHANGELOG.md)
- [Commits](https://github.com/GuillaumeGomez/sysinfo/compare/v0.30.9...v0.30.10)

---
updated-dependencies:
- dependency-name: sysinfo
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 17:24:41 +00:00
89055ef77c feat: oci compliance work (#85)
* chore: rework oci crate to be more composable

* feat: image pull is now internally explicit

* feat: utilize vfs for assembling oci images

* feat: rework oci to preserve permissions via a vfs
2024-04-15 17:24:14 +00:00
24c71e9725 feat: oci packer can now use mksquashfs if available (#70)
* feat: oci packer can now use mksquashfs if available

* fix: use nproc in kernel build script for default jobs, and fix DEV.md guide

* feat: working erofs backend
2024-04-15 00:19:38 +00:00
0a6a112133 feat: basic kratactl top command (#72)
* feat: basic kratactl top command

* fix: use magic bytes 0xff 0xff in idm to improve reliability
2024-04-14 22:32:34 +00:00
1627cbcdd7 feat: idm snooping (#71)
Implement IDM snooping, a new feature that lets you snoop on messages between guests and the host. The feature exposes IDM packet sends and receives
to the API, allowing kratactl to listen for messages and feed them to a user for debugging purposes.
2024-04-14 11:54:21 +00:00
f8247f13e4 build: use LTO for release builds and strip guestinit (#68)
* initrd: strip guestinit binary before adding it to initramfs

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

* build: use LTO for release profile artifacts

this allows us to save ~25-30% on binary sizes, at least in guestinit

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

* revert strip command usage, breaks arm

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

* build: use strip=symbols

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

---------

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-13 09:20:24 +00:00
6d07112e3d feat: implement oci image progress (#64)
* feat: oci progress events

* feat: oci progress bars on launch
2024-04-12 18:09:26 +00:00
6cef03bffa debug: common: run programs in a way that is compatible with alpine doas-sudo-shim (#53)
doas sudo shim (as used by Alpine) does not support passing through environment variables
in the same way that sudo does, so `sh -c` is used instead.

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-12 09:20:34 +00:00
73fd95dbe2 guest: init: default to xterm if TERM is not set (#52)
Most terminal emulators support the xterm control codes more faithfully than the
vt100 ones.

Fixes #51.

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-12 08:52:18 +00:00
f41a1e2168 build: target: use alpine rust triplets when building on alpine (#49)
Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-12 08:40:44 +00:00
346cf4a7fa build(deps): bump async-trait from 0.1.79 to 0.1.80 (#48)
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.79 to 0.1.80.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.79...0.1.80)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-12 07:35:12 +00:00
5e16f3149f feat: guest metrics support (#46)
* feat: initial support for idm send in daemon

* feat: implement IdmClient backend support

* feat: daemon idm now uses IdmClient

* fix: implement channel destruction propagation

* feat: implement request response idm system

* feat: implement metrics support

* proto: move metrics into GuestMetrics for reusability

* fix: log level of guest agent was trace

* feat: metrics tree with process information
2024-04-12 07:34:46 +00:00
ec9060d872 build(deps): bump anyhow from 1.0.81 to 1.0.82 (#42)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.81 to 1.0.82.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.81...1.0.82)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-10 06:19:10 +00:00
6050e99aa7 chore: release (#39)
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-04-09 11:47:58 +00:00
7cfdb27d23 chore(workflows): fix release asset assembly for alpine/debian packages (#40) 2024-04-09 04:45:36 -07:00
93 changed files with 6379 additions and 1466 deletions

CHANGELOG.md

@@ -6,6 +6,40 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
 ## [Unreleased]
+## [0.0.10](https://github.com/edera-dev/krata/compare/v0.0.9...v0.0.10) - 2024-04-22
+### Added
+- implement guest exec ([#107](https://github.com/edera-dev/krata/pull/107))
+- implement kernel / initrd oci image support ([#103](https://github.com/edera-dev/krata/pull/103))
+- idm v2 ([#102](https://github.com/edera-dev/krata/pull/102))
+- oci concurrency improvements ([#95](https://github.com/edera-dev/krata/pull/95))
+- oci tar format, bit-perfect disk storage for config and manifest, concurrent image pulls ([#88](https://github.com/edera-dev/krata/pull/88))
+### Fixed
+- oci cache store should fallback to copy when rename won't work ([#96](https://github.com/edera-dev/krata/pull/96))
+### Other
+- update Cargo.lock dependencies
+## [0.0.9](https://github.com/edera-dev/krata/compare/v0.0.8...v0.0.9) - 2024-04-15
+### Added
+- oci compliance work ([#85](https://github.com/edera-dev/krata/pull/85))
+- oci packer can now use mksquashfs if available ([#70](https://github.com/edera-dev/krata/pull/70))
+- basic kratactl top command ([#72](https://github.com/edera-dev/krata/pull/72))
+- idm snooping ([#71](https://github.com/edera-dev/krata/pull/71))
+- implement oci image progress ([#64](https://github.com/edera-dev/krata/pull/64))
+- guest metrics support ([#46](https://github.com/edera-dev/krata/pull/46))
+### Other
+- init: default to xterm if TERM is not set ([#52](https://github.com/edera-dev/krata/pull/52))
+- update Cargo.toml dependencies
+## [0.0.8](https://github.com/edera-dev/krata/compare/v0.0.7...v0.0.8) - 2024-04-09
+### Other
+- update Cargo.lock dependencies
 ## [0.0.7](https://github.com/edera-dev/krata/compare/v0.0.6...v0.0.7) - 2024-04-09
 ### Other

Cargo.lock (generated)

@@ -17,6 +17,18 @@ version = "1.0.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
+[[package]]
+name = "ahash"
+version = "0.8.11"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e89da841a80418a9b391ebaea17f5c112ffaaa96f621d2c285b5174da76b9011"
+dependencies = [
+ "cfg-if",
+ "once_cell",
+ "version_check",
+ "zerocopy",
+]
 [[package]]
 name = "aho-corasick"
 version = "1.1.3"
@@ -26,6 +38,12 @@ dependencies = [
 "memchr",
 ]
+[[package]]
+name = "allocator-api2"
+version = "0.2.18"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5c6cb57a04249c6480766f7f7cef5467412af1490f8d1e243141daddada3264f"
 [[package]]
 name = "anstream"
 version = "0.6.13"
@@ -76,9 +94,9 @@ dependencies = [
 [[package]]
 name = "anyhow"
-version = "1.0.81"
+version = "1.0.82"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0952808a6c2afd1aa8947271f3a60f1a6763c7b912d210184c5149b5cf147247"
+checksum = "f538837af36e6f6a9be0faa67f9a314f8119e4e4b5867c6ab40ed60360142519"
 [[package]]
 name = "arrayvec"
@@ -128,9 +146,9 @@ dependencies = [
 [[package]]
 name = "async-trait"
-version = "0.1.79"
+version = "0.1.80"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a507401cad91ec6a857ed5513a2073c82a9b9048762b885bb98655b306964681"
+checksum = "c6fa2087f2753a7da8cc1c0dbfcf89579dd57458e36769de5ac750b4671737ca"
 dependencies = [
 "proc-macro2",
 "quote",
@@ -302,6 +320,21 @@ version = "1.6.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "514de17de45fdb8dc022b1a7975556c53c86f9f0aa5f534b98977b171857c2c9"
+[[package]]
+name = "cassowary"
+version = "0.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "df8670b8c7b9dae1793364eafadf7239c40d669904660c5960d74cfd80b46a53"
+[[package]]
+name = "castaway"
+version = "0.2.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8a17ed5635fc8536268e5d4de1e22e81ac34419e5f052d4d51f4e01dcc263fcc"
+dependencies = [
+ "rustversion",
+]
 [[package]]
 name = "cc"
 version = "1.0.90"
@@ -421,6 +454,38 @@ dependencies = [
 "unicode-width",
 ]
+[[package]]
+name = "compact_str"
+version = "0.7.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f86b9c4c00838774a6d902ef931eff7470720c51d90c2e32cfe15dc304737b3f"
+dependencies = [
+ "castaway",
+ "cfg-if",
+ "itoa",
+ "ryu",
+ "static_assertions",
+]
+[[package]]
+name = "console"
+version = "0.15.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0e1f83fc076bd6dd27517eacdf25fef6c4dfe5f1d7448bafaaf3a26f13b5e4eb"
+dependencies = [
+ "encode_unicode",
+ "lazy_static",
+ "libc",
+ "unicode-width",
+ "windows-sys 0.52.0",
+]
+[[package]]
+name = "core-foundation-sys"
+version = "0.8.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "06ea2b9bc92be3c2baa9334a323ebca2d6f074ff852cd1d7b11064035cd3868f"
 [[package]]
 name = "cpufeatures"
 version = "0.2.12"
@@ -439,6 +504,31 @@ dependencies = [
 "cfg-if",
 ]
+[[package]]
+name = "crossbeam-deque"
+version = "0.8.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "613f8cc01fe9cf1a3eb3d7f488fd2fa8388403e97039e2f73692932e291a770d"
+dependencies = [
+ "crossbeam-epoch",
+ "crossbeam-utils",
+]
+[[package]]
+name = "crossbeam-epoch"
+version = "0.9.18"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e"
+dependencies = [
+ "crossbeam-utils",
+]
+[[package]]
+name = "crossbeam-utils"
+version = "0.8.19"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "248e3bacc7dc6baa3b21e405ee045c3047101a49145e7e9eca583ab4c2ca5345"
 [[package]]
 name = "crossterm"
 version = "0.27.0"
@@ -447,6 +537,7 @@ checksum = "f476fe445d41c9e991fd07515a6f463074b782242ccf4a5b7b1d1012e70824df"
 dependencies = [
 "bitflags 2.5.0",
 "crossterm_winapi",
+ "futures-core",
 "libc",
 "mio",
 "parking_lot",
@@ -662,6 +753,12 @@ version = "0.7.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "4445909572dbd556c457c849c4ca58623d84b27c8fff1e74b0b4227d8b90d17b"
+[[package]]
+name = "encode_unicode"
+version = "0.3.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a357d28ed41a50f9c765dbfe56cbc04a64e53e5fc58ba79fbc34c10ef3df831f"
 [[package]]
 name = "env_filter"
 version = "0.1.0"
@@ -710,6 +807,17 @@ dependencies = [
 "arrayvec",
 ]
+[[package]]
+name = "fancy-duration"
+version = "0.9.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b3ae60718ae501dca9d27fd0e322683c86a95a1a01fac1807aa2f9b035cc0882"
+dependencies = [
+ "anyhow",
+ "lazy_static",
+ "regex",
+]
 [[package]]
 name = "fastrand"
 version = "2.0.2"
@@ -938,6 +1046,10 @@ name = "hashbrown"
 version = "0.14.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "290f1a1d9242c78d09ce40a5e87e7554ee637af1351968159f4952f028f75604"
+dependencies = [
+ "ahash",
+ "allocator-api2",
+]
 [[package]]
 name = "heapless"
@@ -1041,6 +1153,12 @@ version = "1.0.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
+[[package]]
+name = "human_bytes"
+version = "0.4.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "91f255a4535024abf7640cb288260811fc14794f62b063652ed349f9a6c2348e"
 [[package]]
 name = "humantime"
 version = "2.1.0"
@@ -1175,6 +1293,34 @@ dependencies = [
 "hashbrown 0.14.3",
 ]
+[[package]]
+name = "indicatif"
+version = "0.17.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "763a5a8f45087d6bcea4222e7b72c291a054edf80e4ef6efd2a4979878c7bea3"
+dependencies = [
+ "console",
+ "instant",
+ "number_prefix",
+ "portable-atomic",
+ "unicode-width",
+]
+[[package]]
+name = "indoc"
+version = "2.0.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b248f5224d1d606005e02c97f5aa4e88eeb230488bcc03bc9ca4d7991399f2b5"
+[[package]]
+name = "instant"
+version = "0.1.12"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7a5bbe824c507c5da5956355e86a746d82e0e1464f65d862cc5e71da70e94b2c"
+dependencies = [
+ "cfg-if",
+]
 [[package]]
 name = "ipnet"
 version = "2.9.0"
@@ -1225,9 +1371,10 @@ dependencies = [
 [[package]]
 name = "krata"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "anyhow",
+ "async-trait",
 "bytes",
 "libc",
 "log",
@@ -1237,6 +1384,8 @@ dependencies = [
 "prost-build",
 "prost-reflect",
 "prost-reflect-build",
+ "prost-types",
+ "scopeguard",
 "serde",
 "tokio",
 "tokio-stream",
@@ -1259,20 +1408,28 @@ dependencies = [
 [[package]]
 name = "krata-ctl"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "anyhow",
 "async-stream",
+ "base64 0.22.0",
 "clap",
 "comfy-table",
 "crossterm",
 "ctrlc",
 "env_logger",
+ "fancy-duration",
+ "human_bytes",
+ "indicatif",
 "krata",
 "log",
 "prost-reflect",
+ "prost-types",
+ "ratatui",
+ "serde",
 "serde_json",
 "serde_yaml",
+ "termtree",
 "tokio",
 "tokio-stream",
 "tonic",
@@ -1281,7 +1438,7 @@ dependencies = [
 [[package]]
 name = "krata-daemon"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "anyhow",
 "async-stream",
@@ -1292,10 +1449,13 @@ dependencies = [
 "env_logger",
 "futures",
 "krata",
+ "krata-oci",
 "krata-runtime",
+ "krata-tokio-tar",
 "log",
 "prost",
 "redb",
+ "scopeguard",
 "signal-hook",
 "tokio",
 "tokio-stream",
@@ -1305,7 +1465,7 @@ dependencies = [
 [[package]]
 name = "krata-guest"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "anyhow",
 "cgroups-rs",
@@ -1323,13 +1483,13 @@ dependencies = [
 "serde",
 "serde_json",
 "sys-mount",
+ "sysinfo",
 "tokio",
- "walkdir",
 ]
 [[package]]
 name = "krata-network"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "anyhow",
 "async-trait",
@@ -1353,7 +1513,7 @@ dependencies = [
 [[package]]
 name = "krata-oci"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "anyhow",
 "async-compression",
@@ -1361,11 +1521,13 @@ dependencies = [
 "backhand",
 "bytes",
 "env_logger",
+ "indexmap 2.2.6",
 "krata-tokio-tar",
 "log",
 "oci-spec",
 "path-clean",
 "reqwest",
+ "scopeguard",
 "serde",
 "serde_json",
 "sha256",
@@ -1378,7 +1540,7 @@ dependencies = [
 [[package]]
 name = "krata-runtime"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "anyhow",
 "backhand",
@@ -1415,7 +1577,7 @@ dependencies = [
 [[package]]
 name = "krata-xencall"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "env_logger",
 "libc",
@@ -1428,7 +1590,7 @@ dependencies = [
 [[package]]
 name = "krata-xenclient"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "async-trait",
 "elf",
@@ -1449,7 +1611,7 @@ dependencies = [
 [[package]]
 name = "krata-xenevtchn"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "libc",
 "log",
@@ -1460,7 +1622,7 @@ dependencies = [
 [[package]]
 name = "krata-xengnt"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "libc",
 "nix 0.28.0",
@@ -1469,7 +1631,7 @@ dependencies = [
 [[package]]
 name = "krata-xenstore"
-version = "0.0.7"
+version = "0.0.10"
 dependencies = [
 "byteorder",
 "env_logger",
@@ -1540,6 +1702,15 @@ dependencies = [
 "libc",
 ]
+[[package]]
+name = "lru"
+version = "0.12.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d3262e75e648fce39813cb56ac41f3c3e3f65217ebf3844d818d1f9398cfb0dc"
+dependencies = [
+ "hashbrown 0.14.3",
+]
 [[package]]
 name = "lzma-sys"
 version = "0.1.20"
@@ -1718,6 +1889,15 @@ dependencies = [
 "minimal-lexical",
 ]
+[[package]]
+name = "ntapi"
+version = "0.4.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e8a3895c6391c39d7fe7ebc444a87eb2991b2a0bc718fdabd071eec617fc68e4"
+dependencies = [
+ "winapi",
+]
 [[package]]
 name = "num-traits"
 version = "0.2.18"
@@ -1737,6 +1917,12 @@ dependencies = [
 "libc",
 ]
+[[package]]
+name = "number_prefix"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "830b246a0e5f20af87141b25c173cd1b609bd7779a4617d6ec582abaf90870f3"
 [[package]]
 name = "object"
 version = "0.32.2"
@@ -1881,6 +2067,12 @@ version = "0.3.30"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "d231b230927b5e4ad203db57bbcbee2802f6bce620b1e4a9024a07d94e2907ec"
+[[package]]
+name = "portable-atomic"
+version = "1.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7170ef9988bc169ba16dd36a7fa041e5c4cbeb6a35b76d4c03daded371eae7c0"
 [[package]]
 name = "ppv-lite86"
 version = "0.2.17"
@@ -2075,10 +2267,50 @@ dependencies = [
 ]
 [[package]]
-name = "redb"
-version = "2.0.0"
+name = "ratatui"
+version = "0.26.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a1100a056c5dcdd4e5513d5333385223b26ef1bf92f31eb38f407e8c20549256"
+checksum = "a564a852040e82671dc50a37d88f3aa83bbc690dfc6844cfe7a2591620206a80"
+dependencies = [
+ "bitflags 2.5.0",
+ "cassowary",
+ "compact_str",
+ "crossterm",
+ "indoc",
+ "itertools",
+ "lru",
+ "paste",
+ "stability",
+ "strum",
+ "unicode-segmentation",
+ "unicode-width",
+]
+[[package]]
+name = "rayon"
+version = "1.10.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b418a60154510ca1a002a752ca9714984e21e4241e804d32555251faf8b78ffa"
+dependencies = [
+ "either",
+ "rayon-core",
+]
+[[package]]
+name = "rayon-core"
+version = "1.12.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1465873a3dfdaa8ae7cb14b4383657caab0b3e8a0aa9ae8e04b044854c8dfce2"
+dependencies = [
+ "crossbeam-deque",
+ "crossbeam-utils",
+]
+[[package]]
+name = "redb"
+version = "2.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ed7508e692a49b6b2290b56540384ccae9b1fb4d77065640b165835b56ffe3bb"
 dependencies = [
 "libc",
 ]
@@ -2132,9 -2364,9 @@ checksum = "adad44e29e4c806119491a7f06f03de4d1af22c3a680dd47f1e6e179439d1f56"
 [[package]]
 name = "reqwest"
-version = "0.12.3"
+version = "0.12.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3e6cc1e89e689536eb5aeede61520e874df5a4707df811cd5da4aa5fbb2aae19"
+checksum = "566cafdd92868e0939d3fb961bd0dc25fcfaaed179291093b3d43e6b3150ea10"
 dependencies = [
 "base64 0.22.0",
 "bytes",
@@ -2231,9 +2463,9 @@ dependencies = [
 [[package]]
 name = "rustls"
-version = "0.22.3"
+version = "0.22.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "99008d7ad0bbbea527ec27bddbc0e432c5b87d8175178cee68d2eec9c4a1813c"
+checksum = "bf4ef73721ac7bcd79b2b315da7779d8fc09718c6b3d2d1b2d94850eb8c18432"
 dependencies = [
 "log",
 "ring",
@@ -2299,9 +2531,9 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 [[package]]
 name = "serde"
-version = "1.0.197"
+version = "1.0.198"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3fb1c873e1b9b056a4dc4c0c198b24c3ffa059243875552b2bd0933b1aee4ce2"
+checksum = "9846a40c979031340571da2545a4e5b7c4163bdae79b301d5f86d03979451fcc"
 dependencies = [
 "serde_derive",
 ]
@@ -2318,9 +2550,9 @@ dependencies = [
 [[package]]
 name = "serde_derive"
-version = "1.0.197"
+version = "1.0.198"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7eb0b34b42edc17f6b7cac84a52a1c5f0e1bb2227e997ca9011ea3dd34e8610b"
+checksum = "e88edab869b01783ba905e7d0153f9fc1a6505a96e4ad3018011eedb838566d9"
 dependencies = [
 "proc-macro2",
 "quote",
@@ -2329,9 +2561,9 @@ dependencies = [
 [[package]]
 name = "serde_json"
-version = "1.0.115"
+version = "1.0.116"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "12dc5c46daa8e9fdf4f5e71b6cf9a53f2487da0e86e55808e2d35539666497dd"
+checksum = "3e17db7126d17feb94eb3fad46bf1a96b034e8aacbc2e775fe81505f8b0b2813"
 dependencies = [
 "itoa",
 "ryu",
@@ -2487,12 +2719,28 @@ version = "0.9.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67"
+[[package]]
+name = "stability"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2ff9eaf853dec4c8802325d8b6d3dffa86cc707fd7a1a4cdbf416e13b061787a"
+dependencies = [
+ "quote",
+ "syn 2.0.57",
+]
 [[package]]
 name = "stable_deref_trait"
 version = "1.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
+[[package]]
+name = "static_assertions"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
 [[package]]
 name = "strsim"
 version = "0.10.0"
@@ -2510,6 +2758,9 @@ name = "strum"
 version = "0.26.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5d8cec3501a5194c432b2b7976db6b7d10ec95c253208b45f83f7136aa985e29"
+dependencies = [
+ "strum_macros",
+]
 [[package]]
 name = "strum_macros"
@@ -2571,6 +2822,21 @@ dependencies = [
 "tracing",
 ]
+[[package]]
+name = "sysinfo"
+version = "0.30.11"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "87341a165d73787554941cd5ef55ad728011566fe714e987d1b976c15dbc3a83"
+dependencies = [
+ "cfg-if",
+ "core-foundation-sys",
+ "libc",
+ "ntapi",
+ "once_cell",
+ "rayon",
+ "windows",
+]
 [[package]]
 name = "tap"
 version = "1.0.1"
@@ -2590,19 +2856,25 @@ dependencies = [
 ]
 [[package]]
-name = "thiserror"
-version = "1.0.58"
+name = "termtree"
+version = "0.4.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03468839009160513471e86a034bb2c5c0e4baae3b43f79ffc55c4a5427b3297" checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76"
[[package]]
name = "thiserror"
version = "1.0.59"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0126ad08bff79f29fc3ae6a55cc72352056dfff61e3ff8bb7129476d44b23aa"
dependencies = [ dependencies = [
"thiserror-impl", "thiserror-impl",
] ]
[[package]] [[package]]
name = "thiserror-impl" name = "thiserror-impl"
version = "1.0.58" version = "1.0.59"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c61f3ba182994efc43764a46c018c347bc492c79f024e705f46567b418f6d4f7" checksum = "d1cd413b5d558b4c5bf3680e324a6fa5014e7b7c067a51e69dbdf47eb7148b66"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
@ -2880,6 +3152,12 @@ dependencies = [
"tinyvec", "tinyvec",
] ]
[[package]]
name = "unicode-segmentation"
version = "1.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4c87d22b6e3f4a18d4d40ef354e97c90fcb14dd91d7dc0aa9d8a1172ebf7202"
[[package]] [[package]]
name = "unicode-width" name = "unicode-width"
version = "0.1.11" version = "0.1.11"
@ -3071,6 +3349,25 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows"
version = "0.52.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e48a53791691ab099e5e2ad123536d0fff50652600abaf43bbf952894110d0be"
dependencies = [
"windows-core",
"windows-targets 0.52.4",
]
[[package]]
name = "windows-core"
version = "0.52.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "33ab640c8d7e35bf8ba19b884ba838ceb4fba93a4e8c65a9059d08afcfc683d9"
dependencies = [
"windows-targets 0.52.4",
]
[[package]] [[package]]
name = "windows-sys" name = "windows-sys"
version = "0.48.0" version = "0.48.0"
@ -3251,6 +3548,26 @@ dependencies = [
"lzma-sys", "lzma-sys",
] ]
[[package]]
name = "zerocopy"
version = "0.7.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "74d4d3961e53fa4c9a25a8637fc2bfaf2595b3d3ae34875568a5cf64787716be"
dependencies = [
"zerocopy-derive",
]
[[package]]
name = "zerocopy-derive"
version = "0.7.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ce1b18ccd8e73a9321186f97e46f9f04b778851177567b1975109d26a08d2a6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
]
[[package]] [[package]]
name = "zeroize" name = "zeroize"
version = "1.7.0" version = "1.7.0"

Cargo.toml

@@ -16,7 +16,7 @@ members = [
 resolver = "2"

 [workspace.package]
-version = "0.0.7"
+version = "0.0.10"
 homepage = "https://krata.dev"
 license = "Apache-2.0"
 repository = "https://github.com/edera-dev/krata"
@@ -26,8 +26,9 @@ anyhow = "1.0"
 arrayvec = "0.7.4"
 async-compression = "0.4.8"
 async-stream = "0.3.5"
-async-trait = "0.1.77"
+async-trait = "0.1.80"
 backhand = "0.15.0"
+base64 = "0.22.0"
 byteorder = "1"
 bytes = "1.5.0"
 cgroups-rs = "0.3.4"
@@ -38,8 +39,12 @@ ctrlc = "3.4.4"
 elf = "0.7.4"
 env_logger = "0.11.0"
 etherparse = "0.14.3"
+fancy-duration = "0.9.2"
 flate2 = "1.0"
 futures = "0.3.30"
+human_bytes = "0.4"
+indexmap = "2.2.6"
+indicatif = "0.17.8"
 ipnetwork = "0.20.0"
 libc = "0.2"
 log = "0.4.20"
@@ -55,15 +60,20 @@ path-clean = "1.0.1"
 prost = "0.12.4"
 prost-build = "0.12.4"
 prost-reflect-build = "0.13.0"
+prost-types = "0.12.4"
 rand = "0.8.5"
-redb = "2.0.0"
+ratatui = "0.26.2"
+redb = "2.1.0"
 rtnetlink = "0.14.1"
-serde_json = "1.0.113"
+scopeguard = "1.2.0"
+serde_json = "1.0.116"
 serde_yaml = "0.9"
 sha256 = "1.5.0"
 signal-hook = "0.3.17"
 slice-copy = "0.3.0"
 smoltcp = "0.11.0"
+sysinfo = "0.30.11"
+termtree = "0.4.1"
 thiserror = "1.0"
 tokio-tun = "0.11.4"
 tonic-build = "0.11.0"
@@ -82,12 +92,12 @@ version = "0.13.1"
 features = ["derive"]

 [workspace.dependencies.reqwest]
-version = "0.12.3"
+version = "0.12.4"
 default-features = false
 features = ["rustls-tls"]

 [workspace.dependencies.serde]
-version = "1.0.196"
+version = "1.0.198"
 features = ["derive"]

 [workspace.dependencies.sys-mount]
@@ -109,3 +119,7 @@ features = ["tls"]
 [workspace.dependencies.uuid]
 version = "1.6.1"
 features = ["v4"]
+
+[profile.release]
+lto = "fat"
+strip = "symbols"

DEV.md

@@ -28,10 +28,21 @@ it's corresponding code path from the above table.
 1. Install the specified Debian version on a x86_64 host _capable_ of KVM (NOTE: KVM is not used, Xen is a type-1 hypervisor).
-2. Install required packages: `apt install git xen-system-amd64 flex bison libelf-dev libssl-dev bc`
+2. Install required packages:
+
+```sh
+$ apt install git xen-system-amd64 build-essential libclang-dev musl-tools flex bison libelf-dev libssl-dev bc protobuf-compiler libprotobuf-dev squashfs-tools erofs-utils
+```
+
 3. Install [rustup](https://rustup.rs) for managing a Rust environment.
+
+Make sure to install the targets that you need for krata:
+
+```sh
+$ rustup target add x86_64-unknown-linux-gnu
+$ rustup target add x86_64-unknown-linux-musl
+```
+
 4. Configure `/etc/default/grub.d/xen.cfg` to give krata guests some room:

 ```sh
@@ -43,7 +54,7 @@ After changing the grub config, update grub: `update-grub`
 Then reboot to boot the system as a Xen dom0.

-You can validate that Xen is setup by running `xl info` and ensuring it returns useful information about the Xen hypervisor.
+You can validate that Xen is setup by running `dmesg | grep "Hypervisor detected"` and ensuring it returns a line like `Hypervisor detected: Xen PV`, if that is missing, the host is not running under Xen.

 5. Clone the krata source code:

 ```sh
@@ -58,8 +69,8 @@ $ ./hack/kernel/build.sh
 ```

 7. Copy the guest kernel image at `target/kernel/kernel-x86_64` to `/var/lib/krata/guest/kernel` to have it automatically detected by kratad.
-8. Launch `./hack/debug/kratanet.sh` and keep it running in the foreground.
-9. Launch `./hack/debug/kratad.sh` and keep it running in the foreground.
+8. Launch `./hack/debug/kratad.sh` and keep it running in the foreground.
+9. Launch `./hack/debug/kratanet.sh` and keep it running in the foreground.
 10. Run kratactl to launch a guest:

 ```sh

FAQ.md

@@ -2,7 +2,7 @@
 ## How does krata currently work?

-The krata hypervisor makes it possible to launch OCI containers on a Xen hypervisor without utilizing the Xen userspace tooling. krata contains just enough of the userspace of Xen (reimplemented in Rust) to start an x86_64 Xen Linux PV guest, and implements a Linux init process that can boot an OCI container. It does so by converting an OCI image into a squashfs file and packaging basic startup data in a bundle which the init container can read.
+The krata hypervisor makes it possible to launch OCI containers on a Xen hypervisor without utilizing the Xen userspace tooling. krata contains just enough of the userspace of Xen (reimplemented in Rust) to start an x86_64 Xen Linux PV guest, and implements a Linux init process that can boot an OCI container. It does so by converting an OCI image into a squashfs/erofs file and packaging basic startup data in a bundle which the init container can read.

 In addition, due to the desire to reduce dependence on the dom0 network, krata contains a networking daemon called kratanet. kratanet listens for krata guests to startup and launches a userspace networking environment. krata guests can access the dom0 networking stack via the proxynat layer that makes it possible to communicate over UDP, TCP, and ICMP (echo only) to the outside world. In addition, each krata guest is provided a "gateway" IP (both in IPv4 and IPv6) which utilizes smoltcp to provide a virtual host. That virtual host in the future could dial connections into the container to access container networking resources.


@@ -11,16 +11,24 @@ resolver = "2"
 [dependencies]
 anyhow = { workspace = true }
 async-stream = { workspace = true }
+base64 = { workspace = true }
 clap = { workspace = true }
 comfy-table = { workspace = true }
-crossterm = { workspace = true }
+crossterm = { workspace = true, features = ["event-stream"] }
 ctrlc = { workspace = true, features = ["termination"] }
 env_logger = { workspace = true }
-krata = { path = "../krata", version = "^0.0.7" }
+fancy-duration = { workspace = true }
+human_bytes = { workspace = true }
+indicatif = { workspace = true }
+krata = { path = "../krata", version = "^0.0.10" }
 log = { workspace = true }
 prost-reflect = { workspace = true, features = ["serde"] }
+prost-types = { workspace = true }
+ratatui = { workspace = true }
+serde = { workspace = true }
 serde_json = { workspace = true }
 serde_yaml = { workspace = true }
+termtree = { workspace = true }
 tokio = { workspace = true }
 tokio-stream = { workspace = true }
 tonic = { workspace = true }


@@ -52,34 +52,31 @@ impl DestroyCommand {
 async fn wait_guest_destroyed(id: &str, events: EventStream) -> Result<()> {
     let mut stream = events.subscribe();
     while let Ok(event) = stream.recv().await {
-        match event {
-            Event::GuestChanged(changed) => {
-                let Some(guest) = changed.guest else {
-                    continue;
-                };
-                if guest.id != id {
-                    continue;
-                }
-                let Some(state) = guest.state else {
-                    continue;
-                };
-                if let Some(ref error) = state.error_info {
-                    if state.status() == GuestStatus::Failed {
-                        error!("destroy failed: {}", error.message);
-                        std::process::exit(1);
-                    } else {
-                        error!("guest error: {}", error.message);
-                    }
-                }
-                if state.status() == GuestStatus::Destroyed {
-                    std::process::exit(0);
-                }
-            }
-        }
+        let Event::GuestChanged(changed) = event;
+        let Some(guest) = changed.guest else {
+            continue;
+        };
+        if guest.id != id {
+            continue;
+        }
+        let Some(state) = guest.state else {
+            continue;
+        };
+        if let Some(ref error) = state.error_info {
+            if state.status() == GuestStatus::Failed {
+                error!("destroy failed: {}", error.message);
+                std::process::exit(1);
+            } else {
+                error!("guest error: {}", error.message);
+            }
+        }
+        if state.status() == GuestStatus::Destroyed {
+            std::process::exit(0);
+        }
     }
     Ok(())
 }


@@ -0,0 +1,70 @@
use std::collections::HashMap;
use anyhow::Result;
use clap::Parser;
use krata::v1::{
common::{GuestTaskSpec, GuestTaskSpecEnvVar},
control::{control_service_client::ControlServiceClient, ExecGuestRequest},
};
use tonic::{transport::Channel, Request};
use crate::console::StdioConsoleStream;
use super::resolve_guest;
#[derive(Parser)]
#[command(about = "Execute a command inside the guest")]
pub struct ExecCommand {
#[arg[short, long, help = "Environment variables"]]
env: Option<Vec<String>>,
#[arg(short = 'w', long, help = "Working directory")]
working_directory: Option<String>,
#[arg(help = "Guest to exec inside, either the name or the uuid")]
guest: String,
#[arg(
allow_hyphen_values = true,
trailing_var_arg = true,
help = "Command to run inside the guest"
)]
command: Vec<String>,
}
impl ExecCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let initial = ExecGuestRequest {
guest_id,
task: Some(GuestTaskSpec {
environment: env_map(&self.env.unwrap_or_default())
.iter()
.map(|(key, value)| GuestTaskSpecEnvVar {
key: key.clone(),
value: value.clone(),
})
.collect(),
command: self.command,
working_directory: self.working_directory.unwrap_or_default(),
}),
data: vec![],
};
let stream = StdioConsoleStream::stdin_stream_exec(initial).await;
let response = client.exec_guest(Request::new(stream)).await?.into_inner();
let code = StdioConsoleStream::exec_output(response).await?;
std::process::exit(code);
}
}
fn env_map(env: &[String]) -> HashMap<String, String> {
let mut map = HashMap::<String, String>::new();
for item in env {
if let Some((key, value)) = item.split_once('=') {
map.insert(key.to_string(), value.to_string());
}
}
map
}
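The `env_map` helper above is self-contained enough to exercise on its own. This sketch copies its logic verbatim into a runnable program to show the behavior: each `KEY=VALUE` argument is split on the first `=`, and entries without an `=` are silently skipped.

```rust
use std::collections::HashMap;

// Same logic as the env_map helper in exec.rs: split each "KEY=VALUE"
// argument on the first '=' and collect into a map.
fn env_map(env: &[String]) -> HashMap<String, String> {
    let mut map = HashMap::<String, String>::new();
    for item in env {
        if let Some((key, value)) = item.split_once('=') {
            map.insert(key.to_string(), value.to_string());
        }
    }
    map
}

fn main() {
    let args = vec![
        "PATH=/bin".to_string(),
        "RUST_LOG=debug".to_string(),
        "malformed".to_string(), // no '=', silently ignored
    ];
    let map = env_map(&args);
    assert_eq!(map.get("PATH").map(String::as_str), Some("/bin"));
    assert_eq!(map.get("RUST_LOG").map(String::as_str), Some("debug"));
    assert_eq!(map.len(), 2);
    println!("parsed {} environment variables", map.len());
}
```

Note that `split_once('=')` keeps any later `=` characters in the value, so `FOO=a=b` maps `FOO` to `a=b`, which matches typical `-e` flag semantics.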


@@ -0,0 +1,22 @@
use anyhow::Result;
use clap::Parser;
use krata::v1::control::{control_service_client::ControlServiceClient, IdentifyHostRequest};
use tonic::{transport::Channel, Request};
#[derive(Parser)]
#[command(about = "Identify information about the host")]
pub struct IdentifyHostCommand {}
impl IdentifyHostCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.identify_host(Request::new(IdentifyHostRequest {}))
.await?
.into_inner();
println!("Host UUID: {}", response.host_uuid);
println!("Host Domain: {}", response.host_domid);
println!("Krata Version: {}", response.krata_version);
Ok(())
}
}


@@ -0,0 +1,157 @@
use anyhow::Result;
use base64::Engine;
use clap::{Parser, ValueEnum};
use krata::{
events::EventStream,
idm::{internal, serialize::IdmSerializable, transport::IdmTransportPacketForm},
v1::control::{control_service_client::ControlServiceClient, SnoopIdmReply, SnoopIdmRequest},
};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tokio_stream::StreamExt;
use tonic::transport::Channel;
use crate::format::{kv2line, proto2dynamic, value2kv};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum IdmSnoopFormat {
Simple,
Jsonl,
KeyValue,
}
#[derive(Parser)]
#[command(about = "Snoop on the IDM bus")]
pub struct IdmSnoopCommand {
#[arg(short, long, default_value = "simple", help = "Output format")]
format: IdmSnoopFormat,
}
impl IdmSnoopCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
let mut stream = client.snoop_idm(SnoopIdmRequest {}).await?.into_inner();
while let Some(reply) = stream.next().await {
let reply = reply?;
let Some(line) = convert_idm_snoop(reply) else {
continue;
};
match self.format {
IdmSnoopFormat::Simple => {
self.print_simple(line)?;
}
IdmSnoopFormat::Jsonl => {
let encoded = serde_json::to_string(&line)?;
println!("{}", encoded.trim());
}
IdmSnoopFormat::KeyValue => {
self.print_key_value(line)?;
}
}
}
Ok(())
}
fn print_simple(&self, line: IdmSnoopLine) -> Result<()> {
let encoded = if !line.packet.decoded.is_null() {
serde_json::to_string(&line.packet.decoded)?
} else {
base64::prelude::BASE64_STANDARD.encode(&line.packet.data)
};
println!(
"({} -> {}) {} {} {}",
line.from, line.to, line.packet.id, line.packet.form, encoded
);
Ok(())
}
fn print_key_value(&self, line: IdmSnoopLine) -> Result<()> {
let kvs = value2kv(serde_json::to_value(line)?)?;
println!("{}", kv2line(kvs));
Ok(())
}
}
#[derive(Serialize, Deserialize)]
pub struct IdmSnoopLine {
pub from: String,
pub to: String,
pub packet: IdmSnoopData,
}
#[derive(Serialize, Deserialize)]
pub struct IdmSnoopData {
pub id: u64,
pub channel: u64,
pub form: String,
pub data: String,
pub decoded: Value,
}
pub fn convert_idm_snoop(reply: SnoopIdmReply) -> Option<IdmSnoopLine> {
let packet = &(reply.packet?);
let decoded = if packet.channel == 0 {
match packet.form() {
IdmTransportPacketForm::Event => internal::Event::decode(&packet.data)
.ok()
.and_then(|event| proto2dynamic(event).ok()),
IdmTransportPacketForm::Request
| IdmTransportPacketForm::StreamRequest
| IdmTransportPacketForm::StreamRequestUpdate => {
internal::Request::decode(&packet.data)
.ok()
.and_then(|event| proto2dynamic(event).ok())
}
IdmTransportPacketForm::Response | IdmTransportPacketForm::StreamResponseUpdate => {
internal::Response::decode(&packet.data)
.ok()
.and_then(|event| proto2dynamic(event).ok())
}
_ => None,
}
} else {
None
};
let decoded = decoded
.and_then(|message| serde_json::to_value(message).ok())
.unwrap_or(Value::Null);
let data = IdmSnoopData {
id: packet.id,
channel: packet.channel,
form: match packet.form() {
IdmTransportPacketForm::Raw => "raw".to_string(),
IdmTransportPacketForm::Event => "event".to_string(),
IdmTransportPacketForm::Request => "request".to_string(),
IdmTransportPacketForm::Response => "response".to_string(),
IdmTransportPacketForm::StreamRequest => "stream-request".to_string(),
IdmTransportPacketForm::StreamRequestUpdate => "stream-request-update".to_string(),
IdmTransportPacketForm::StreamRequestClosed => "stream-request-closed".to_string(),
IdmTransportPacketForm::StreamResponseUpdate => "stream-response-update".to_string(),
IdmTransportPacketForm::StreamResponseClosed => "stream-response-closed".to_string(),
_ => format!("unknown-{}", packet.form),
},
data: base64::prelude::BASE64_STANDARD.encode(&packet.data),
decoded,
};
Some(IdmSnoopLine {
from: reply.from,
to: reply.to,
packet: data,
})
}


@@ -1,17 +1,17 @@
 use std::collections::HashMap;

 use anyhow::Result;
-use clap::Parser;
+use clap::{Parser, ValueEnum};
 use krata::{
     events::EventStream,
     v1::{
         common::{
             guest_image_spec::Image, GuestImageSpec, GuestOciImageSpec, GuestSpec, GuestStatus,
-            GuestTaskSpec, GuestTaskSpecEnvVar,
+            GuestTaskSpec, GuestTaskSpecEnvVar, OciImageFormat,
         },
         control::{
             control_service_client::ControlServiceClient, watch_events_reply::Event,
-            CreateGuestRequest,
+            CreateGuestRequest, PullImageRequest,
         },
     },
 };
@@ -19,11 +19,21 @@ use log::error;
 use tokio::select;
 use tonic::{transport::Channel, Request};

-use crate::console::StdioConsoleStream;
+use crate::{console::StdioConsoleStream, pull::pull_interactive_progress};
+
+#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
+pub enum LaunchImageFormat {
+    Squashfs,
+    Erofs,
+}

 #[derive(Parser)]
 #[command(about = "Launch a new guest")]
-pub struct LauchCommand {
+pub struct LaunchCommand {
+    #[arg(long, default_value = "squashfs", help = "Image format")]
+    image_format: LaunchImageFormat,
+    #[arg(long, help = "Overwrite image cache on pull")]
+    pull_overwrite_cache: bool,
     #[arg(short, long, help = "Name of the guest")]
     name: Option<String>,
     #[arg(
@@ -54,6 +64,12 @@ pub struct LauchCommand {
         help = "Wait for the guest to start, implied by --attach"
     )]
     wait: bool,
+    #[arg(short = 'k', long, help = "OCI kernel image for guest to use")]
+    kernel: Option<String>,
+    #[arg(short = 'I', long, help = "OCI initrd image for guest to use")]
+    initrd: Option<String>,
+    #[arg(short = 'w', long, help = "Working directory")]
+    working_directory: Option<String>,
     #[arg(help = "Container image for guest to use")]
     oci: String,
     #[arg(
@@ -64,18 +80,47 @@ pub struct LauchCommand {
     command: Vec<String>,
 }

-impl LauchCommand {
+impl LaunchCommand {
     pub async fn run(
         self,
         mut client: ControlServiceClient<Channel>,
         events: EventStream,
     ) -> Result<()> {
+        let image = self
+            .pull_image(
+                &mut client,
+                &self.oci,
+                match self.image_format {
+                    LaunchImageFormat::Squashfs => OciImageFormat::Squashfs,
+                    LaunchImageFormat::Erofs => OciImageFormat::Erofs,
+                },
+            )
+            .await?;
+
+        let kernel = if let Some(ref kernel) = self.kernel {
+            let kernel_image = self
+                .pull_image(&mut client, kernel, OciImageFormat::Tar)
+                .await?;
+            Some(kernel_image)
+        } else {
+            None
+        };
+
+        let initrd = if let Some(ref initrd) = self.initrd {
+            let kernel_image = self
+                .pull_image(&mut client, initrd, OciImageFormat::Tar)
+                .await?;
+            Some(kernel_image)
+        } else {
+            None
+        };
+
         let request = CreateGuestRequest {
             spec: Some(GuestSpec {
                 name: self.name.unwrap_or_default(),
-                image: Some(GuestImageSpec {
-                    image: Some(Image::Oci(GuestOciImageSpec { image: self.oci })),
-                }),
+                image: Some(image),
+                kernel,
+                initrd,
                 vcpus: self.cpus,
                 mem: self.mem,
                 task: Some(GuestTaskSpec {
@@ -87,6 +132,7 @@ impl LauchCommand {
                     })
                     .collect(),
                     command: self.command,
+                    working_directory: self.working_directory.unwrap_or_default(),
                 }),
                 annotations: vec![],
             }),
@@ -121,6 +167,28 @@ impl LauchCommand {
         StdioConsoleStream::restore_terminal_mode();
         std::process::exit(code.unwrap_or(0));
     }
+
+    async fn pull_image(
+        &self,
+        client: &mut ControlServiceClient<Channel>,
+        image: &str,
+        format: OciImageFormat,
+    ) -> Result<GuestImageSpec> {
+        let response = client
+            .pull_image(PullImageRequest {
+                image: image.to_string(),
+                format: format.into(),
+                overwrite_cache: self.pull_overwrite_cache,
+            })
+            .await?;
+        let reply = pull_interactive_progress(response.into_inner()).await?;
+        Ok(GuestImageSpec {
+            image: Some(Image::Oci(GuestOciImageSpec {
+                digest: reply.digest,
+                format: reply.format,
+            })),
+        })
+    }
 }

 async fn wait_guest_started(id: &str, events: EventStream) -> Result<()> {


@@ -0,0 +1,83 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use krata::{
events::EventStream,
v1::{
common::GuestMetricNode,
control::{control_service_client::ControlServiceClient, ReadGuestMetricsRequest},
},
};
use tonic::transport::Channel;
use crate::format::{kv2line, metrics_flat, metrics_tree, proto2dynamic};
use super::resolve_guest;
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum MetricsFormat {
Tree,
Json,
JsonPretty,
Yaml,
KeyValue,
}
#[derive(Parser)]
#[command(about = "Read metrics from the guest")]
pub struct MetricsCommand {
#[arg(short, long, default_value = "tree", help = "Output format")]
format: MetricsFormat,
#[arg(help = "Guest to read metrics for, either the name or the uuid")]
guest: String,
}
impl MetricsCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let root = client
.read_guest_metrics(ReadGuestMetricsRequest { guest_id })
.await?
.into_inner()
.root
.unwrap_or_default();
match self.format {
MetricsFormat::Tree => {
self.print_metrics_tree(root)?;
}
MetricsFormat::Json | MetricsFormat::JsonPretty | MetricsFormat::Yaml => {
let value = serde_json::to_value(proto2dynamic(root)?)?;
let encoded = if self.format == MetricsFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == MetricsFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
};
println!("{}", encoded.trim());
}
MetricsFormat::KeyValue => {
self.print_key_value(root)?;
}
}
Ok(())
}
fn print_metrics_tree(&self, root: GuestMetricNode) -> Result<()> {
print!("{}", metrics_tree(root));
Ok(())
}
fn print_key_value(&self, metrics: GuestMetricNode) -> Result<()> {
let kvs = metrics_flat(metrics);
println!("{}", kv2line(kvs));
Ok(())
}
}


@@ -1,9 +1,15 @@
 pub mod attach;
 pub mod destroy;
+pub mod exec;
+pub mod identify_host;
+pub mod idm_snoop;
 pub mod launch;
 pub mod list;
 pub mod logs;
+pub mod metrics;
+pub mod pull;
 pub mod resolve;
+pub mod top;
 pub mod watch;

 use anyhow::{anyhow, Result};
@@ -16,8 +22,10 @@ use krata::{
 use tonic::{transport::Channel, Request};

 use self::{
-    attach::AttachCommand, destroy::DestroyCommand, launch::LauchCommand, list::ListCommand,
-    logs::LogsCommand, resolve::ResolveCommand, watch::WatchCommand,
+    attach::AttachCommand, destroy::DestroyCommand, exec::ExecCommand,
+    identify_host::IdentifyHostCommand, idm_snoop::IdmSnoopCommand, launch::LaunchCommand,
+    list::ListCommand, logs::LogsCommand, metrics::MetricsCommand, pull::PullCommand,
+    resolve::ResolveCommand, top::TopCommand, watch::WatchCommand,
 };

 #[derive(Parser)]
@@ -40,13 +48,19 @@ pub struct ControlCommand {
 #[derive(Subcommand)]
 pub enum Commands {
-    Launch(LauchCommand),
+    Launch(LaunchCommand),
     Destroy(DestroyCommand),
     List(ListCommand),
     Attach(AttachCommand),
+    Pull(PullCommand),
     Logs(LogsCommand),
     Watch(WatchCommand),
     Resolve(ResolveCommand),
+    Metrics(MetricsCommand),
+    IdmSnoop(IdmSnoopCommand),
+    Top(TopCommand),
+    IdentifyHost(IdentifyHostCommand),
+    Exec(ExecCommand),
 }

 impl ControlCommand {
@@ -82,6 +96,30 @@ impl ControlCommand {
             Commands::Resolve(resolve) => {
                 resolve.run(client).await?;
             }
+            Commands::Metrics(metrics) => {
+                metrics.run(client, events).await?;
+            }
+            Commands::IdmSnoop(snoop) => {
+                snoop.run(client, events).await?;
+            }
+            Commands::Top(top) => {
+                top.run(client, events).await?;
+            }
+            Commands::Pull(pull) => {
+                pull.run(client).await?;
+            }
+            Commands::IdentifyHost(identify) => {
+                identify.run(client).await?;
+            }
+            Commands::Exec(exec) => {
+                exec.run(client).await?;
+            }
         }
         Ok(())
     }


@@ -0,0 +1,47 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use krata::v1::{
common::OciImageFormat,
control::{control_service_client::ControlServiceClient, PullImageRequest},
};
use tonic::transport::Channel;
use crate::pull::pull_interactive_progress;
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
pub enum PullImageFormat {
Squashfs,
Erofs,
Tar,
}
#[derive(Parser)]
#[command(about = "Pull an image into the cache")]
pub struct PullCommand {
#[arg(help = "Image name")]
image: String,
#[arg(short = 's', long, default_value = "squashfs", help = "Image format")]
image_format: PullImageFormat,
#[arg(short = 'o', long, help = "Overwrite image cache")]
overwrite_cache: bool,
}
impl PullCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.pull_image(PullImageRequest {
image: self.image.clone(),
format: match self.image_format {
PullImageFormat::Squashfs => OciImageFormat::Squashfs.into(),
PullImageFormat::Erofs => OciImageFormat::Erofs.into(),
PullImageFormat::Tar => OciImageFormat::Tar.into(),
},
overwrite_cache: self.overwrite_cache,
})
.await?;
let reply = pull_interactive_progress(response.into_inner()).await?;
println!("{}", reply.digest);
Ok(())
}
}

crates/ctl/src/cli/top.rs

@@ -0,0 +1,215 @@
use anyhow::Result;
use clap::Parser;
use krata::{events::EventStream, v1::control::control_service_client::ControlServiceClient};
use std::{
io::{self, stdout, Stdout},
time::Duration,
};
use tokio::select;
use tokio_stream::StreamExt;
use tonic::transport::Channel;
use crossterm::{
event::{Event, KeyCode, KeyEvent, KeyEventKind},
execute,
terminal::*,
};
use ratatui::{
prelude::*,
symbols::border,
widgets::{
block::{Position, Title},
Block, Borders, Row, Table, TableState,
},
};
use crate::{
format::guest_status_text,
metrics::{
lookup_metric_value, MultiMetricCollector, MultiMetricCollectorHandle, MultiMetricState,
},
};
#[derive(Parser)]
#[command(about = "Dashboard for running guests")]
pub struct TopCommand {}
pub type Tui = Terminal<CrosstermBackend<Stdout>>;
impl TopCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
let collector = MultiMetricCollector::new(client, events, Duration::from_millis(200))?;
let collector = collector.launch().await?;
let mut tui = TopCommand::init()?;
let mut app = TopApp {
metrics: MultiMetricState { guests: vec![] },
exit: false,
table: TableState::new(),
};
app.run(collector, &mut tui).await?;
TopCommand::restore()?;
Ok(())
}
pub fn init() -> io::Result<Tui> {
execute!(stdout(), EnterAlternateScreen)?;
enable_raw_mode()?;
Terminal::new(CrosstermBackend::new(stdout()))
}
pub fn restore() -> io::Result<()> {
execute!(stdout(), LeaveAlternateScreen)?;
disable_raw_mode()?;
Ok(())
}
}
pub struct TopApp {
table: TableState,
metrics: MultiMetricState,
exit: bool,
}
impl TopApp {
pub async fn run(
&mut self,
mut collector: MultiMetricCollectorHandle,
terminal: &mut Tui,
) -> Result<()> {
let mut events = crossterm::event::EventStream::new();
while !self.exit {
terminal.draw(|frame| self.render_frame(frame))?;
select! {
x = collector.receiver.recv() => match x {
Some(state) => {
self.metrics = state;
},
None => {
break;
}
},
x = events.next() => match x {
Some(event) => {
let event = event?;
self.handle_event(event)?;
},
None => {
break;
}
}
};
}
Ok(())
}
fn render_frame(&mut self, frame: &mut Frame) {
frame.render_widget(self, frame.size());
}
fn handle_event(&mut self, event: Event) -> io::Result<()> {
match event {
Event::Key(key_event) if key_event.kind == KeyEventKind::Press => {
self.handle_key_event(key_event)
}
_ => {}
};
Ok(())
}
fn exit(&mut self) {
self.exit = true;
}
fn handle_key_event(&mut self, key_event: KeyEvent) {
if let KeyCode::Char('q') = key_event.code {
self.exit()
}
}
}
impl Widget for &mut TopApp {
fn render(self, area: Rect, buf: &mut Buffer) {
let title = Title::from(" krata hypervisor ".bold());
let instructions = Title::from(vec![" Quit ".into(), "<Q> ".blue().bold()]);
let block = Block::default()
.title(title.alignment(Alignment::Center))
.title(
instructions
.alignment(Alignment::Center)
.position(Position::Bottom),
)
.borders(Borders::ALL)
.border_set(border::THICK);
let mut rows = vec![];
for ms in &self.metrics.guests {
let Some(ref spec) = ms.guest.spec else {
continue;
};
let Some(ref state) = ms.guest.state else {
continue;
};
let memory_total = ms
.root
.as_ref()
.and_then(|root| lookup_metric_value(root, "system/memory/total"));
let memory_used = ms
.root
.as_ref()
.and_then(|root| lookup_metric_value(root, "system/memory/used"));
let memory_free = ms
.root
.as_ref()
.and_then(|root| lookup_metric_value(root, "system/memory/free"));
let row = Row::new(vec![
spec.name.clone(),
ms.guest.id.clone(),
guest_status_text(state.status()),
memory_total.unwrap_or_default(),
memory_used.unwrap_or_default(),
memory_free.unwrap_or_default(),
]);
rows.push(row);
}
let widths = [
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
];
let table = Table::new(rows, widths)
.header(
Row::new(vec![
"name",
"id",
"status",
"total memory",
"used memory",
"free memory",
])
.style(Style::new().bold())
.bottom_margin(1),
)
.column_spacing(1)
.block(block);
StatefulWidget::render(table, area, buf, &mut self.table);
}
}


@@ -28,12 +28,10 @@ impl WatchCommand {
         let mut stream = events.subscribe();
         loop {
             let event = stream.recv().await?;
-            match event {
-                Event::GuestChanged(changed) => {
-                    let guest = changed.guest.clone();
-                    self.print_event("guest.changed", changed, guest)?;
-                }
-            }
+            let Event::GuestChanged(changed) = event;
+            let guest = changed.guest.clone();
+            self.print_event("guest.changed", changed, guest)?;
         }
     }


@@ -1,4 +1,4 @@
-use anyhow::Result;
+use anyhow::{anyhow, Result};
 use async_stream::stream;
 use crossterm::{
     terminal::{disable_raw_mode, enable_raw_mode, is_raw_mode_enabled},
@@ -8,12 +8,15 @@ use krata::{
     events::EventStream,
     v1::{
         common::GuestStatus,
-        control::{watch_events_reply::Event, ConsoleDataReply, ConsoleDataRequest},
+        control::{
+            watch_events_reply::Event, ConsoleDataReply, ConsoleDataRequest, ExecGuestReply,
+            ExecGuestRequest,
+        },
     },
 };
 use log::debug;
 use tokio::{
-    io::{stdin, stdout, AsyncReadExt, AsyncWriteExt},
+    io::{stderr, stdin, stdout, AsyncReadExt, AsyncWriteExt},
     task::JoinHandle,
 };
 use tokio_stream::{Stream, StreamExt};
@@ -45,6 +48,31 @@ impl StdioConsoleStream {
         }
     }
pub async fn stdin_stream_exec(
initial: ExecGuestRequest,
) -> impl Stream<Item = ExecGuestRequest> {
let mut stdin = stdin();
stream! {
yield initial;
let mut buffer = vec![0u8; 60];
loop {
let size = match stdin.read(&mut buffer).await {
Ok(size) => size,
Err(error) => {
debug!("failed to read stdin: {}", error);
break;
}
};
let data = buffer[0..size].to_vec();
if size == 1 && buffer[0] == 0x1d {
break;
}
yield ExecGuestRequest { guest_id: String::default(), task: None, data };
}
}
}
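The `stdin_stream_exec` stream above ends the exec session when a raw read yields exactly one 0x1d byte, which is Ctrl-] (the classic telnet-style escape key). A minimal, self-contained sketch of that termination check (the function name here is illustrative, not part of the codebase):

```rust
/// Returns true when a raw stdin read should end the exec session:
/// a read of exactly one 0x1d byte (Ctrl-]), as in the stream above.
fn is_escape_chunk(chunk: &[u8]) -> bool {
    chunk.len() == 1 && chunk[0] == 0x1d
}

fn main() {
    assert!(is_escape_chunk(&[0x1d]));
    // Ctrl-] embedded in a larger read does not terminate the stream.
    assert!(!is_escape_chunk(&[b'a', 0x1d]));
    assert!(!is_escape_chunk(b"hello"));
}
```
Note that only a lone escape byte ends the stream; a paste containing 0x1d among other bytes is forwarded as data.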
     pub async fn stdout(mut stream: Streaming<ConsoleDataReply>) -> Result<()> {
         if stdin().is_tty() {
             enable_raw_mode()?;
@@ -62,6 +90,32 @@ impl StdioConsoleStream {
         Ok(())
     }
pub async fn exec_output(mut stream: Streaming<ExecGuestReply>) -> Result<i32> {
let mut stdout = stdout();
let mut stderr = stderr();
while let Some(reply) = stream.next().await {
let reply = reply?;
if !reply.stdout.is_empty() {
stdout.write_all(&reply.stdout).await?;
stdout.flush().await?;
}
if !reply.stderr.is_empty() {
stderr.write_all(&reply.stderr).await?;
stderr.flush().await?;
}
if reply.exited {
if reply.error.is_empty() {
return Ok(reply.exit_code);
} else {
return Err(anyhow!("exec failed: {}", reply.error));
}
}
}
Ok(-1)
}
     pub async fn guest_exit_hook(
         id: String,
         events: EventStream,
@@ -69,29 +123,26 @@ impl StdioConsoleStream {
         Ok(tokio::task::spawn(async move {
             let mut stream = events.subscribe();
             while let Ok(event) = stream.recv().await {
-                match event {
-                    Event::GuestChanged(changed) => {
-                        let Some(guest) = changed.guest else {
-                            continue;
-                        };
-                        let Some(state) = guest.state else {
-                            continue;
-                        };
-                        if guest.id != id {
-                            continue;
-                        }
-                        if let Some(exit_info) = state.exit_info {
-                            return Some(exit_info.code);
-                        }
-                        let status = state.status();
-                        if status == GuestStatus::Destroying || status == GuestStatus::Destroyed {
-                            return Some(10);
-                        }
-                    }
-                }
+                let Event::GuestChanged(changed) = event;
+                let Some(guest) = changed.guest else {
+                    continue;
+                };
+                let Some(state) = guest.state else {
+                    continue;
+                };
+                if guest.id != id {
+                    continue;
+                }
+                if let Some(exit_info) = state.exit_info {
+                    return Some(exit_info.code);
+                }
+                let status = state.status();
+                if status == GuestStatus::Destroying || status == GuestStatus::Destroyed {
+                    return Some(10);
+                }
             }
             None


@@ -1,8 +1,12 @@
-use std::collections::HashMap;
+use std::{collections::HashMap, time::Duration};
 use anyhow::Result;
-use krata::v1::common::{Guest, GuestStatus};
-use prost_reflect::{DynamicMessage, ReflectMessage, Value};
+use fancy_duration::FancyDuration;
+use human_bytes::human_bytes;
+use krata::v1::common::{Guest, GuestMetricFormat, GuestMetricNode, GuestStatus};
+use prost_reflect::{DynamicMessage, ReflectMessage};
+use prost_types::Value;
+use termtree::Tree;

 pub fn proto2dynamic(proto: impl ReflectMessage) -> Result<DynamicMessage> {
     Ok(DynamicMessage::decode(
@@ -11,46 +15,59 @@ pub fn proto2dynamic(proto: impl ReflectMessage) -> Result<DynamicMessage> {
     )?)
 }

-pub fn proto2kv(proto: impl ReflectMessage) -> Result<HashMap<String, String>> {
-    let message = proto2dynamic(proto)?;
+pub fn value2kv(value: serde_json::Value) -> Result<HashMap<String, String>> {
     let mut map = HashMap::new();
-    fn crawl(prefix: &str, map: &mut HashMap<String, String>, message: &DynamicMessage) {
-        for (field, value) in message.fields() {
-            let path = if prefix.is_empty() {
-                field.name().to_string()
-            } else {
-                format!("{}.{}", prefix, field.name())
-            };
-            match value {
-                Value::Message(child) => {
-                    crawl(&path, map, child);
-                }
-                Value::EnumNumber(number) => {
-                    if let Some(e) = field.kind().as_enum() {
-                        if let Some(value) = e.get_value(*number) {
-                            map.insert(path, value.name().to_string());
-                        }
-                    }
-                }
-                Value::String(value) => {
-                    map.insert(path, value.clone());
-                }
-                _ => {
-                    map.insert(path, value.to_string());
+    fn crawl(prefix: String, map: &mut HashMap<String, String>, value: serde_json::Value) {
+        fn dot(prefix: &str, next: String) -> String {
+            if prefix.is_empty() {
+                next.to_string()
+            } else {
+                format!("{}.{}", prefix, next)
+            }
+        }
+
+        match value {
+            serde_json::Value::Null => {
+                map.insert(prefix, "null".to_string());
+            }
+            serde_json::Value::String(value) => {
+                map.insert(prefix, value);
+            }
+            serde_json::Value::Bool(value) => {
+                map.insert(prefix, value.to_string());
+            }
+            serde_json::Value::Number(value) => {
+                map.insert(prefix, value.to_string());
+            }
+            serde_json::Value::Array(value) => {
+                for (i, item) in value.into_iter().enumerate() {
+                    let next = dot(&prefix, i.to_string());
+                    crawl(next, map, item);
+                }
+            }
+            serde_json::Value::Object(value) => {
+                for (key, item) in value {
+                    let next = dot(&prefix, key);
+                    crawl(next, map, item);
                 }
             }
         }
     }
-    crawl("", &mut map, &message);
+    crawl("".to_string(), &mut map, value);
     Ok(map)
 }

+pub fn proto2kv(proto: impl ReflectMessage) -> Result<HashMap<String, String>> {
+    let message = proto2dynamic(proto)?;
+    let value = serde_json::to_value(message)?;
+    value2kv(value)
+}
+
 pub fn kv2line(map: HashMap<String, String>) -> String {
     map.iter()
         .map(|(k, v)| format!("{}=\"{}\"", k, v.replace('"', "\\\"")))
@@ -85,3 +102,63 @@ pub fn guest_simple_line(guest: &Guest) -> String {
     let ipv6 = network.map(|x| x.guest_ipv6.as_str()).unwrap_or("");
     format!("{}\t{}\t{}\t{}\t{}", guest.id, state, name, ipv4, ipv6)
 }
fn metrics_value_string(value: Value) -> String {
proto2dynamic(value)
.map(|x| serde_json::to_string(&x).ok())
.ok()
.flatten()
.unwrap_or_default()
}
fn metrics_value_numeric(value: Value) -> f64 {
let string = metrics_value_string(value);
string.parse::<f64>().ok().unwrap_or(f64::NAN)
}
pub fn metrics_value_pretty(value: Value, format: GuestMetricFormat) -> String {
match format {
GuestMetricFormat::Bytes => human_bytes(metrics_value_numeric(value)),
GuestMetricFormat::Integer => (metrics_value_numeric(value) as u64).to_string(),
GuestMetricFormat::DurationSeconds => {
FancyDuration(Duration::from_secs_f64(metrics_value_numeric(value))).to_string()
}
_ => metrics_value_string(value),
}
}
fn metrics_flat_internal(prefix: &str, node: GuestMetricNode, map: &mut HashMap<String, String>) {
if let Some(value) = node.value {
map.insert(prefix.to_string(), metrics_value_string(value));
}
for child in node.children {
let path = if prefix.is_empty() {
child.name.to_string()
} else {
format!("{}.{}", prefix, child.name)
};
metrics_flat_internal(&path, child, map);
}
}
pub fn metrics_flat(root: GuestMetricNode) -> HashMap<String, String> {
let mut map = HashMap::new();
metrics_flat_internal("", root, &mut map);
map
}
pub fn metrics_tree(node: GuestMetricNode) -> Tree<String> {
let mut name = node.name.to_string();
let format = node.format();
if let Some(value) = node.value {
let value_string = metrics_value_pretty(value, format);
name.push_str(&format!(": {}", value_string));
}
let mut tree = Tree::new(name);
for child in node.children {
tree.push(metrics_tree(child));
}
tree
}
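The `metrics_flat_internal` walk above joins node names with dots, recording a value at each dotted path and recursing into children with an extended prefix. A self-contained sketch of the same recursion, using a simplified stand-in for `GuestMetricNode` (the `Node` struct here is illustrative, not the real prost-generated type):

```rust
use std::collections::HashMap;

// Simplified stand-in for GuestMetricNode: a name, an optional value,
// and child nodes. The real type is generated from protobuf.
struct Node {
    name: String,
    value: Option<String>,
    children: Vec<Node>,
}

// Mirrors metrics_flat_internal: record a value at the dotted path,
// then recurse into children, extending the prefix with ".".
fn flatten(prefix: &str, node: &Node, map: &mut HashMap<String, String>) {
    if let Some(ref value) = node.value {
        map.insert(prefix.to_string(), value.clone());
    }
    for child in &node.children {
        let path = if prefix.is_empty() {
            child.name.clone()
        } else {
            format!("{}.{}", prefix, child.name)
        };
        flatten(&path, child, map);
    }
}

fn main() {
    let root = Node {
        name: "root".into(),
        value: None,
        children: vec![Node {
            name: "system".into(),
            value: None,
            children: vec![Node {
                name: "memory".into(),
                value: Some("1024".into()),
                children: vec![],
            }],
        }],
    };
    let mut map = HashMap::new();
    flatten("", &root, &mut map);
    assert_eq!(map.get("system.memory").map(String::as_str), Some("1024"));
}
```
The root node's own name is deliberately not part of the path: the initial prefix is empty, so top-level children become top-level keys.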


@@ -1,3 +1,5 @@
 pub mod cli;
 pub mod console;
 pub mod format;
+pub mod metrics;
+pub mod pull;

crates/ctl/src/metrics.rs Normal file

@@ -0,0 +1,158 @@
use anyhow::Result;
use krata::{
events::EventStream,
v1::{
common::{Guest, GuestMetricNode, GuestStatus},
control::{
control_service_client::ControlServiceClient, watch_events_reply::Event,
ListGuestsRequest, ReadGuestMetricsRequest,
},
},
};
use log::error;
use std::time::Duration;
use tokio::{
select,
sync::mpsc::{channel, Receiver, Sender},
task::JoinHandle,
time::{sleep, timeout},
};
use tonic::transport::Channel;
use crate::format::metrics_value_pretty;
pub struct MetricState {
pub guest: Guest,
pub root: Option<GuestMetricNode>,
}
pub struct MultiMetricState {
pub guests: Vec<MetricState>,
}
pub struct MultiMetricCollector {
client: ControlServiceClient<Channel>,
events: EventStream,
period: Duration,
}
pub struct MultiMetricCollectorHandle {
pub receiver: Receiver<MultiMetricState>,
task: JoinHandle<()>,
}
impl Drop for MultiMetricCollectorHandle {
fn drop(&mut self) {
self.task.abort();
}
}
impl MultiMetricCollector {
pub fn new(
client: ControlServiceClient<Channel>,
events: EventStream,
period: Duration,
) -> Result<MultiMetricCollector> {
Ok(MultiMetricCollector {
client,
events,
period,
})
}
pub async fn launch(mut self) -> Result<MultiMetricCollectorHandle> {
let (sender, receiver) = channel::<MultiMetricState>(100);
let task = tokio::task::spawn(async move {
if let Err(error) = self.process(sender).await {
error!("failed to process multi metric collector: {}", error);
}
});
Ok(MultiMetricCollectorHandle { receiver, task })
}
pub async fn process(&mut self, sender: Sender<MultiMetricState>) -> Result<()> {
let mut events = self.events.subscribe();
let mut guests: Vec<Guest> = self
.client
.list_guests(ListGuestsRequest {})
.await?
.into_inner()
.guests;
loop {
let collect = select! {
x = events.recv() => match x {
Ok(event) => {
let Event::GuestChanged(changed) = event;
let Some(guest) = changed.guest else {
continue;
};
let Some(ref state) = guest.state else {
continue;
};
guests.retain(|x| x.id != guest.id);
if state.status() != GuestStatus::Destroying {
guests.push(guest);
}
false
},
Err(error) => {
return Err(error.into());
}
},
_ = sleep(self.period) => {
true
}
};
if !collect {
continue;
}
let mut metrics = Vec::new();
for guest in &guests {
let Some(ref state) = guest.state else {
continue;
};
if state.status() != GuestStatus::Started {
continue;
}
let root = timeout(
Duration::from_secs(5),
self.client.read_guest_metrics(ReadGuestMetricsRequest {
guest_id: guest.id.clone(),
}),
)
.await
.ok()
.and_then(|x| x.ok())
.map(|x| x.into_inner())
.and_then(|x| x.root);
metrics.push(MetricState {
guest: guest.clone(),
root,
});
}
sender.send(MultiMetricState { guests: metrics }).await?;
}
}
}
pub fn lookup<'a>(node: &'a GuestMetricNode, path: &str) -> Option<&'a GuestMetricNode> {
let Some((what, b)) = path.split_once('/') else {
return node.children.iter().find(|x| x.name == path);
};
let next = node.children.iter().find(|x| x.name == what)?;
return lookup(next, b);
}
pub fn lookup_metric_value(node: &GuestMetricNode, path: &str) -> Option<String> {
lookup(node, path).and_then(|x| {
x.value
.as_ref()
.map(|v| metrics_value_pretty(v.clone(), x.format()))
})
}
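The `lookup` helper above resolves slash-separated paths like `"system/memory/total"` (the strings the top dashboard uses) by splitting on the first `/`, descending into the matching child, and recursing on the remainder. A self-contained sketch of that recursion with a simplified stand-in for `GuestMetricNode` (the `Node` struct is illustrative only):

```rust
// Simplified stand-in for GuestMetricNode, for illustration only.
struct Node {
    name: String,
    value: Option<String>,
    children: Vec<Node>,
}

// Mirrors the lookup above: split the path on the first '/', descend into
// the child whose name matches the head, and recurse on the remainder.
// A path with no '/' left is matched directly against the children.
fn lookup<'a>(node: &'a Node, path: &str) -> Option<&'a Node> {
    let Some((head, rest)) = path.split_once('/') else {
        return node.children.iter().find(|x| x.name == path);
    };
    let next = node.children.iter().find(|x| x.name == head)?;
    lookup(next, rest)
}

fn main() {
    let root = Node {
        name: "root".into(),
        value: None,
        children: vec![Node {
            name: "system".into(),
            value: None,
            children: vec![Node {
                name: "memory".into(),
                value: None,
                children: vec![Node {
                    name: "total".into(),
                    value: Some("2048".into()),
                    children: vec![],
                }],
            }],
        }],
    };
    let found = lookup(&root, "system/memory/total");
    assert_eq!(found.and_then(|n| n.value.clone()), Some("2048".to_string()));
    assert!(lookup(&root, "system/disk/total").is_none());
}
```
Because the search always starts at the children, the root node's own name never appears in the path, matching how `lookup_metric_value` is called from the dashboard.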

crates/ctl/src/pull.rs Normal file

@@ -0,0 +1,268 @@
use std::{
collections::{hash_map::Entry, HashMap},
time::Duration,
};
use anyhow::{anyhow, Result};
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use krata::v1::control::{
image_progress_indication::Indication, ImageProgressIndication, ImageProgressLayerPhase,
ImageProgressPhase, PullImageReply,
};
use tokio_stream::StreamExt;
use tonic::Streaming;
const SPINNER_STRINGS: &[&str] = &[
"[= ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ = ]",
"[ =]",
"[====================]",
];
fn progress_bar_for_indication(indication: &ImageProgressIndication) -> Option<ProgressBar> {
match indication.indication.as_ref() {
Some(Indication::Hidden(_)) | None => None,
Some(Indication::Bar(indic)) => {
let bar = ProgressBar::new(indic.total);
bar.enable_steady_tick(Duration::from_millis(100));
Some(bar)
}
Some(Indication::Spinner(_)) => {
let bar = ProgressBar::new_spinner();
bar.enable_steady_tick(Duration::from_millis(100));
Some(bar)
}
Some(Indication::Completed(indic)) => {
let bar = ProgressBar::new_spinner();
bar.enable_steady_tick(Duration::from_millis(100));
if !indic.message.is_empty() {
bar.finish_with_message(indic.message.clone());
} else {
bar.finish()
}
Some(bar)
}
}
}
fn configure_for_indication(
bar: &mut ProgressBar,
multi_progress: &mut MultiProgress,
indication: &ImageProgressIndication,
top_phase: Option<ImageProgressPhase>,
layer_phase: Option<ImageProgressLayerPhase>,
layer_id: Option<&str>,
) {
let prefix = if let Some(phase) = top_phase {
match phase {
ImageProgressPhase::Unknown => "unknown",
ImageProgressPhase::Started => "started",
ImageProgressPhase::Resolving => "resolving",
ImageProgressPhase::Resolved => "resolved",
ImageProgressPhase::ConfigDownload => "downloading",
ImageProgressPhase::LayerDownload => "downloading",
ImageProgressPhase::Assemble => "assembling",
ImageProgressPhase::Pack => "packing",
ImageProgressPhase::Complete => "complete",
}
} else if let Some(phase) = layer_phase {
match phase {
ImageProgressLayerPhase::Unknown => "unknown",
ImageProgressLayerPhase::Waiting => "waiting",
ImageProgressLayerPhase::Downloading => "downloading",
ImageProgressLayerPhase::Downloaded => "downloaded",
ImageProgressLayerPhase::Extracting => "extracting",
ImageProgressLayerPhase::Extracted => "extracted",
}
} else {
""
};
let prefix = prefix.to_string();
let id = if let Some(layer_id) = layer_id {
let hash = if let Some((_, hash)) = layer_id.split_once(':') {
hash
} else {
"unknown"
};
let small_hash = if hash.len() > 10 { &hash[0..10] } else { hash };
Some(format!("{:width$}", small_hash, width = 10))
} else {
None
};
let prefix = if let Some(id) = id {
format!("{} {:width$}", id, prefix, width = 11)
} else {
format!(" {:width$}", prefix, width = 11)
};
match indication.indication.as_ref() {
Some(Indication::Hidden(_)) | None => {
multi_progress.remove(bar);
return;
}
Some(Indication::Bar(indic)) => {
if indic.is_bytes {
bar.set_style(ProgressStyle::with_template("{prefix} [{bar:20}] {msg} {binary_bytes}/{binary_total_bytes} ({binary_bytes_per_sec}) eta: {eta}").unwrap().progress_chars("=>-"));
} else {
bar.set_style(
ProgressStyle::with_template(
"{prefix} [{bar:20} {msg} {human_pos}/{human_len} ({per_sec}/sec)",
)
.unwrap()
.progress_chars("=>-"),
);
}
bar.set_message(indic.message.clone());
bar.set_position(indic.current);
bar.set_length(indic.total);
}
Some(Indication::Spinner(indic)) => {
bar.set_style(
ProgressStyle::with_template("{prefix} {spinner} {msg}")
.unwrap()
.tick_strings(SPINNER_STRINGS),
);
bar.set_message(indic.message.clone());
}
Some(Indication::Completed(indic)) => {
if bar.is_finished() {
return;
}
bar.disable_steady_tick();
bar.set_message(indic.message.clone());
if indic.total != 0 {
bar.set_position(indic.total);
bar.set_length(indic.total);
}
if bar.style().get_tick_str(0).contains('=') {
bar.set_style(
ProgressStyle::with_template("{prefix} {spinner} {msg}")
.unwrap()
.tick_strings(SPINNER_STRINGS),
);
bar.finish_with_message(indic.message.clone());
} else if indic.is_bytes {
bar.set_style(
ProgressStyle::with_template("{prefix} [{bar:20}] {msg} {binary_total_bytes}")
.unwrap()
.progress_chars("=>-"),
);
} else {
bar.set_style(
ProgressStyle::with_template("{prefix} [{bar:20}] {msg}")
.unwrap()
.progress_chars("=>-"),
);
}
bar.tick();
bar.enable_steady_tick(Duration::from_millis(100));
}
};
bar.set_prefix(prefix);
bar.tick();
}
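The prefix logic in `configure_for_indication` shortens an OCI layer id such as `sha256:0123456789abcdef…` by taking the part after the colon, truncating to ten characters, and padding to a fixed-width column. A minimal sketch of just that step (the function name is illustrative, not part of the codebase):

```rust
// Mirrors the layer-id formatting above: take the digest after the ':'
// ("unknown" if there is no ':'), truncate to ten characters, and pad
// to a fixed-width ten-character column for alignment.
fn short_layer_id(layer_id: &str) -> String {
    let hash = match layer_id.split_once(':') {
        Some((_, hash)) => hash,
        None => "unknown",
    };
    let small = if hash.len() > 10 { &hash[0..10] } else { hash };
    format!("{:width$}", small, width = 10)
}

fn main() {
    assert_eq!(short_layer_id("sha256:0123456789abcdef"), "0123456789");
    assert_eq!(short_layer_id("bad-id"), "unknown   ");
}
```
The fixed width keeps the per-layer progress bars aligned in the `MultiProgress` display even when a layer id is malformed.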
pub async fn pull_interactive_progress(
mut stream: Streaming<PullImageReply>,
) -> Result<PullImageReply> {
let mut multi_progress = MultiProgress::new();
multi_progress.set_move_cursor(false);
let mut progresses = HashMap::new();
while let Some(reply) = stream.next().await {
let reply = match reply {
Ok(reply) => reply,
Err(error) => {
multi_progress.clear()?;
return Err(error.into());
}
};
if reply.progress.is_none() && !reply.digest.is_empty() {
multi_progress.clear()?;
return Ok(reply);
}
let Some(oci) = reply.progress else {
continue;
};
for layer in &oci.layers {
let Some(ref indication) = layer.indication else {
continue;
};
let bar = match progresses.entry(layer.id.clone()) {
Entry::Occupied(entry) => Some(entry.into_mut()),
Entry::Vacant(entry) => {
if let Some(bar) = progress_bar_for_indication(indication) {
multi_progress.add(bar.clone());
Some(entry.insert(bar))
} else {
None
}
}
};
if let Some(bar) = bar {
configure_for_indication(
bar,
&mut multi_progress,
indication,
None,
Some(layer.phase()),
Some(&layer.id),
);
}
}
if let Some(ref indication) = oci.indication {
let bar = match progresses.entry("root".to_string()) {
Entry::Occupied(entry) => Some(entry.into_mut()),
Entry::Vacant(entry) => {
if let Some(bar) = progress_bar_for_indication(indication) {
multi_progress.add(bar.clone());
Some(entry.insert(bar))
} else {
None
}
}
};
if let Some(bar) = bar {
configure_for_indication(
bar,
&mut multi_progress,
indication,
Some(oci.phase()),
None,
None,
);
}
}
}
multi_progress.clear()?;
Err(anyhow!("never received final reply for image pull"))
}


@@ -17,14 +17,17 @@ circular-buffer = { workspace = true }
 clap = { workspace = true }
 env_logger = { workspace = true }
 futures = { workspace = true }
-krata = { path = "../krata", version = "^0.0.7" }
-krata-runtime = { path = "../runtime", version = "^0.0.7" }
+krata = { path = "../krata", version = "^0.0.10" }
+krata-oci = { path = "../oci", version = "^0.0.10" }
+krata-runtime = { path = "../runtime", version = "^0.0.10" }
 log = { workspace = true }
 prost = { workspace = true }
 redb = { workspace = true }
+scopeguard = { workspace = true }
 signal-hook = { workspace = true }
 tokio = { workspace = true }
 tokio-stream = { workspace = true }
+krata-tokio-tar = { workspace = true }
 tonic = { workspace = true, features = ["tls"] }
 uuid = { workspace = true }


@@ -1,22 +1,9 @@
 use anyhow::Result;
 use clap::Parser;
 use env_logger::Env;
-use krata::dial::ControlDialAddress;
-use kratad::Daemon;
-use kratart::Runtime;
+use kratad::command::DaemonCommand;
 use log::LevelFilter;
-use std::{
-    str::FromStr,
-    sync::{atomic::AtomicBool, Arc},
-};
-
-#[derive(Parser)]
-struct DaemonCommand {
-    #[arg(short, long, default_value = "unix:///var/lib/krata/daemon.socket")]
-    listen: String,
-    #[arg(short, long, default_value = "/var/lib/krata")]
-    store: String,
-}
+use std::sync::{atomic::AtomicBool, Arc};

 #[tokio::main(flavor = "multi_thread", worker_threads = 10)]
 async fn main() -> Result<()> {
@@ -25,12 +12,8 @@ async fn main() -> Result<()> {
         .init();
     mask_sighup()?;

-    let args = DaemonCommand::parse();
-    let addr = ControlDialAddress::from_str(&args.listen)?;
-    let runtime = Runtime::new(args.store.clone()).await?;
-    let mut daemon = Daemon::new(args.store.clone(), runtime).await?;
-    daemon.listen(addr).await?;
-    Ok(())
+    let command = DaemonCommand::parse();
+    command.run().await
 }

 fn mask_sighup() -> Result<()> {


@@ -0,0 +1,36 @@
use anyhow::Result;
use clap::{CommandFactory, Parser};
use krata::dial::ControlDialAddress;
use std::str::FromStr;
use crate::Daemon;
#[derive(Parser)]
#[command(version, about = "Krata hypervisor daemon")]
pub struct DaemonCommand {
#[arg(
short,
long,
default_value = "unix:///var/lib/krata/daemon.socket",
help = "Listen address"
)]
listen: String,
#[arg(short, long, default_value = "/var/lib/krata", help = "Storage path")]
store: String,
}
impl DaemonCommand {
pub async fn run(self) -> Result<()> {
let addr = ControlDialAddress::from_str(&self.listen)?;
let mut daemon = Daemon::new(self.store.clone()).await?;
daemon.listen(addr).await?;
Ok(())
}
pub fn version() -> String {
DaemonCommand::command()
.get_version()
.unwrap_or("unknown")
.to_string()
}
}


@@ -1,6 +1,6 @@
 use std::{collections::HashMap, sync::Arc};

-use anyhow::Result;
+use anyhow::{anyhow, Result};
 use circular_buffer::CircularBuffer;
 use kratart::channel::ChannelService;
 use log::error;
@@ -11,6 +11,9 @@ use tokio::{
     },
     task::JoinHandle,
 };
+use uuid::Uuid;
+
+use crate::glt::GuestLookupTable;

 const CONSOLE_BUFFER_SIZE: usize = 1024 * 1024;
 type RawConsoleBuffer = CircularBuffer<CONSOLE_BUFFER_SIZE, u8>;
@@ -21,6 +24,7 @@ type BufferMap = Arc<Mutex<HashMap<u32, ConsoleBuffer>>>;
 #[derive(Clone)]
 pub struct DaemonConsoleHandle {
+    glt: GuestLookupTable,
     listeners: ListenerMap,
     buffers: BufferMap,
     sender: Sender<(u32, Vec<u8>)>,
@@ -50,9 +54,12 @@ impl DaemonConsoleAttachHandle {
 impl DaemonConsoleHandle {
     pub async fn attach(
         &self,
-        domid: u32,
+        uuid: Uuid,
         sender: Sender<Vec<u8>>,
     ) -> Result<DaemonConsoleAttachHandle> {
+        let Some(domid) = self.glt.lookup_domid_by_uuid(&uuid).await else {
+            return Err(anyhow!("unable to find domain {}", uuid));
+        };
         let buffers = self.buffers.lock().await;
         let buffer = buffers.get(&domid).map(|x| x.to_vec()).unwrap_or_default();
         drop(buffers);
@@ -77,21 +84,23 @@ impl Drop for DaemonConsoleHandle {
 }

 pub struct DaemonConsole {
+    glt: GuestLookupTable,
     listeners: ListenerMap,
     buffers: BufferMap,
-    receiver: Receiver<(u32, Vec<u8>)>,
+    receiver: Receiver<(u32, Option<Vec<u8>>)>,
     sender: Sender<(u32, Vec<u8>)>,
     task: JoinHandle<()>,
 }

 impl DaemonConsole {
-    pub async fn new() -> Result<DaemonConsole> {
+    pub async fn new(glt: GuestLookupTable) -> Result<DaemonConsole> {
         let (service, sender, receiver) =
             ChannelService::new("krata-console".to_string(), Some(0)).await?;
         let task = service.launch().await?;
         let listeners = Arc::new(Mutex::new(HashMap::new()));
         let buffers = Arc::new(Mutex::new(HashMap::new()));
         Ok(DaemonConsole {
+            glt,
             listeners,
             buffers,
             receiver,
@@ -101,6 +110,7 @@ impl DaemonConsole {
     }

     pub async fn launch(mut self) -> Result<DaemonConsoleHandle> {
+        let glt = self.glt.clone();
         let listeners = self.listeners.clone();
         let buffers = self.buffers.clone();
         let sender = self.sender.clone();
@@ -110,6 +120,7 @@ impl DaemonConsole {
             }
         });
         Ok(DaemonConsoleHandle {
+            glt,
             listeners,
             buffers,
             sender,
@@ -124,16 +135,22 @@ impl DaemonConsole {
             };
             let mut buffers = self.buffers.lock().await;
-            let buffer = buffers
-                .entry(domid)
-                .or_insert_with_key(|_| RawConsoleBuffer::boxed());
-            buffer.extend_from_slice(&data);
-            drop(buffers);
-            let mut listeners = self.listeners.lock().await;
-            if let Some(senders) = listeners.get_mut(&domid) {
-                senders.retain(|sender| {
-                    !matches!(sender.try_send(data.to_vec()), Err(TrySendError::Closed(_)))
-                });
+            if let Some(data) = data {
+                let buffer = buffers
+                    .entry(domid)
+                    .or_insert_with_key(|_| RawConsoleBuffer::boxed());
+                buffer.extend_from_slice(&data);
+                drop(buffers);
+                let mut listeners = self.listeners.lock().await;
+                if let Some(senders) = listeners.get_mut(&domid) {
+                    senders.retain(|sender| {
+                        !matches!(sender.try_send(data.to_vec()), Err(TrySendError::Closed(_)))
+                    });
+                }
+            } else {
+                buffers.remove(&domid);
+                let mut listeners = self.listeners.lock().await;
+                listeners.remove(&domid);
             }
         }
         Ok(())


@ -1,25 +1,43 @@
use std::{pin::Pin, str::FromStr};
use async_stream::try_stream; use async_stream::try_stream;
use futures::Stream; use futures::Stream;
use krata::v1::{ use krata::{
common::{Guest, GuestState, GuestStatus}, idm::internal::{
control::{ exec_stream_request_update::Update, request::Request as IdmRequestType,
control_service_server::ControlService, ConsoleDataReply, ConsoleDataRequest, response::Response as IdmResponseType, ExecEnvVar, ExecStreamRequestStart,
CreateGuestReply, CreateGuestRequest, DestroyGuestReply, DestroyGuestRequest, ExecStreamRequestStdin, ExecStreamRequestUpdate, MetricsRequest, Request as IdmRequest,
ListGuestsReply, ListGuestsRequest, ResolveGuestReply, ResolveGuestRequest, },
WatchEventsReply, WatchEventsRequest, v1::{
common::{Guest, GuestState, GuestStatus, OciImageFormat},
control::{
control_service_server::ControlService, ConsoleDataReply, ConsoleDataRequest,
CreateGuestReply, CreateGuestRequest, DestroyGuestReply, DestroyGuestRequest,
ExecGuestReply, ExecGuestRequest, IdentifyHostReply, IdentifyHostRequest,
ListGuestsReply, ListGuestsRequest, PullImageReply, PullImageRequest,
ReadGuestMetricsReply, ReadGuestMetricsRequest, ResolveGuestReply, ResolveGuestRequest,
SnoopIdmReply, SnoopIdmRequest, WatchEventsReply, WatchEventsRequest,
},
}, },
}; };
use krataoci::{
name::ImageName,
packer::{service::OciPackerService, OciPackedFormat, OciPackedImage},
progress::{OciProgress, OciProgressContext},
};
use std::{pin::Pin, str::FromStr};
use tokio::{ use tokio::{
select, select,
sync::mpsc::{channel, Sender}, sync::mpsc::{channel, Sender},
task::JoinError,
}; };
use tokio_stream::StreamExt; use tokio_stream::StreamExt;
use tonic::{Request, Response, Status, Streaming}; use tonic::{Request, Response, Status, Streaming};
use uuid::Uuid; use uuid::Uuid;
use crate::{console::DaemonConsoleHandle, db::GuestStore, event::DaemonEventContext}; use crate::{
command::DaemonCommand, console::DaemonConsoleHandle, db::GuestStore,
event::DaemonEventContext, glt::GuestLookupTable, idm::DaemonIdmHandle,
metrics::idm_metric_to_api, oci::convert_oci_progress,
};
pub struct ApiError { pub struct ApiError {
message: String, message: String,
@ -40,25 +58,34 @@ impl From<ApiError> for Status {
} }
#[derive(Clone)]
pub struct DaemonControlService {
    glt: GuestLookupTable,
    events: DaemonEventContext,
    console: DaemonConsoleHandle,
    idm: DaemonIdmHandle,
    guests: GuestStore,
    guest_reconciler_notify: Sender<Uuid>,
    packer: OciPackerService,
}

impl DaemonControlService {
    pub fn new(
        glt: GuestLookupTable,
        events: DaemonEventContext,
        console: DaemonConsoleHandle,
        idm: DaemonIdmHandle,
        guests: GuestStore,
        guest_reconciler_notify: Sender<Uuid>,
        packer: OciPackerService,
    ) -> Self {
        Self {
            glt,
            events,
            console,
            idm,
            guests,
            guest_reconciler_notify,
            packer,
        }
    }
}

@@ -68,14 +95,40 @@ enum ConsoleDataSelect {
    Write(Option<Result<ConsoleDataRequest, tonic::Status>>),
}

enum PullImageSelect {
    Progress(Option<OciProgress>),
    Completed(Result<Result<OciPackedImage, anyhow::Error>, JoinError>),
}
#[tonic::async_trait]
impl ControlService for DaemonControlService {
    type ExecGuestStream =
        Pin<Box<dyn Stream<Item = Result<ExecGuestReply, Status>> + Send + 'static>>;

    type ConsoleDataStream =
        Pin<Box<dyn Stream<Item = Result<ConsoleDataReply, Status>> + Send + 'static>>;

    type PullImageStream =
        Pin<Box<dyn Stream<Item = Result<PullImageReply, Status>> + Send + 'static>>;

    type WatchEventsStream =
        Pin<Box<dyn Stream<Item = Result<WatchEventsReply, Status>> + Send + 'static>>;

    type SnoopIdmStream =
        Pin<Box<dyn Stream<Item = Result<SnoopIdmReply, Status>> + Send + 'static>>;

    async fn identify_host(
        &self,
        request: Request<IdentifyHostRequest>,
    ) -> Result<Response<IdentifyHostReply>, Status> {
        let _ = request.into_inner();
        Ok(Response::new(IdentifyHostReply {
            host_domid: self.glt.host_domid(),
            host_uuid: self.glt.host_uuid().to_string(),
            krata_version: DaemonCommand::version(),
        }))
    }
    async fn create_guest(
        &self,
        request: Request<CreateGuestRequest>,

@@ -98,6 +151,7 @@ impl ControlService for RuntimeControlService {
                network: None,
                exit_info: None,
                error_info: None,
                host: self.glt.host_uuid().to_string(),
                domid: u32::MAX,
            }),
            spec: Some(spec),

@@ -116,6 +170,98 @@
        }))
    }
async fn exec_guest(
&self,
request: Request<Streaming<ExecGuestRequest>>,
) -> Result<Response<Self::ExecGuestStream>, Status> {
let mut input = request.into_inner();
let Some(request) = input.next().await else {
return Err(ApiError {
message: "expected to have at least one request".to_string(),
}
.into());
};
let request = request?;
let Some(task) = request.task else {
return Err(ApiError {
message: "task is missing".to_string(),
}
.into());
};
let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let idm = self.idm.client(uuid).await.map_err(|error| ApiError {
message: error.to_string(),
})?;
let idm_request = IdmRequest {
request: Some(IdmRequestType::ExecStream(ExecStreamRequestUpdate {
update: Some(Update::Start(ExecStreamRequestStart {
environment: task
.environment
.into_iter()
.map(|x| ExecEnvVar {
key: x.key,
value: x.value,
})
.collect(),
command: task.command,
working_directory: task.working_directory,
})),
})),
};
let output = try_stream! {
let mut handle = idm.send_stream(idm_request).await.map_err(|x| ApiError {
message: x.to_string(),
})?;
loop {
select! {
x = input.next() => if let Some(update) = x {
let update: Result<ExecGuestRequest, Status> = update.map_err(|error| ApiError {
message: error.to_string()
}.into());
if let Ok(update) = update {
if !update.data.is_empty() {
let _ = handle.update(IdmRequest {
request: Some(IdmRequestType::ExecStream(ExecStreamRequestUpdate {
update: Some(Update::Stdin(ExecStreamRequestStdin {
data: update.data,
})),
}))}).await;
}
}
},
x = handle.receiver.recv() => match x {
Some(response) => {
let Some(IdmResponseType::ExecStream(update)) = response.response else {
break;
};
let reply = ExecGuestReply {
exited: update.exited,
error: update.error,
exit_code: update.exit_code,
stdout: update.stdout,
stderr: update.stderr
};
yield reply;
},
None => {
break;
}
}
};
}
};
Ok(Response::new(Box::pin(output) as Self::ExecGuestStream))
}
    async fn destroy_guest(
        &self,
        request: Request<DestroyGuestRequest>,

@@ -198,36 +344,10 @@ impl ControlService for RuntimeControlService {
        let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
            message: error.to_string(),
        })?;
let guest = self
.guests
.read(uuid)
.await
.map_err(|error| ApiError {
message: error.to_string(),
})?
.ok_or_else(|| ApiError {
message: "guest did not exist in the database".to_string(),
})?;
let Some(ref state) = guest.state else {
return Err(ApiError {
message: "guest did not have state".to_string(),
}
.into());
};
let domid = state.domid;
if domid == 0 {
return Err(ApiError {
message: "invalid domid on the guest".to_string(),
}
.into());
}
        let (sender, mut receiver) = channel(100);
        let console = self
            .console
            .attach(uuid, sender)
            .await
            .map_err(|error| ApiError {
                message: format!("failed to attach to console: {}", error),

@@ -269,6 +389,107 @@ impl ControlService for RuntimeControlService {
        Ok(Response::new(Box::pin(output) as Self::ConsoleDataStream))
    }
async fn read_guest_metrics(
&self,
request: Request<ReadGuestMetricsRequest>,
) -> Result<Response<ReadGuestMetricsReply>, Status> {
let request = request.into_inner();
let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let client = self.idm.client(uuid).await.map_err(|error| ApiError {
message: error.to_string(),
})?;
let response = client
.send(IdmRequest {
request: Some(IdmRequestType::Metrics(MetricsRequest {})),
})
.await
.map_err(|error| ApiError {
message: error.to_string(),
})?;
let mut reply = ReadGuestMetricsReply::default();
if let Some(IdmResponseType::Metrics(metrics)) = response.response {
reply.root = metrics.root.map(idm_metric_to_api);
}
Ok(Response::new(reply))
}
async fn pull_image(
&self,
request: Request<PullImageRequest>,
) -> Result<Response<Self::PullImageStream>, Status> {
let request = request.into_inner();
let name = ImageName::parse(&request.image).map_err(|err| ApiError {
message: err.to_string(),
})?;
let format = match request.format() {
OciImageFormat::Unknown => OciPackedFormat::Squashfs,
OciImageFormat::Squashfs => OciPackedFormat::Squashfs,
OciImageFormat::Erofs => OciPackedFormat::Erofs,
OciImageFormat::Tar => OciPackedFormat::Tar,
};
let (context, mut receiver) = OciProgressContext::create();
let our_packer = self.packer.clone();
let output = try_stream! {
let mut task = tokio::task::spawn(async move {
our_packer.request(name, format, request.overwrite_cache, context).await
});
let abort_handle = task.abort_handle();
let _task_cancel_guard = scopeguard::guard(abort_handle, |handle| {
handle.abort();
});
loop {
let what = select! {
x = receiver.changed() => match x {
Ok(_) => PullImageSelect::Progress(Some(receiver.borrow_and_update().clone())),
Err(_) => PullImageSelect::Progress(None),
},
x = &mut task => PullImageSelect::Completed(x),
};
match what {
PullImageSelect::Progress(Some(progress)) => {
let reply = PullImageReply {
progress: Some(convert_oci_progress(progress)),
digest: String::new(),
format: OciImageFormat::Unknown.into(),
};
yield reply;
},
PullImageSelect::Completed(result) => {
let result = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let packed = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let reply = PullImageReply {
progress: None,
digest: packed.digest,
format: match packed.format {
OciPackedFormat::Squashfs => OciImageFormat::Squashfs.into(),
OciPackedFormat::Erofs => OciImageFormat::Erofs.into(),
OciPackedFormat::Tar => OciImageFormat::Tar.into(),
},
};
yield reply;
break;
},
_ => {
continue;
}
}
}
};
Ok(Response::new(Box::pin(output) as Self::PullImageStream))
}
    async fn watch_events(
        &self,
        request: Request<WatchEventsRequest>,

@@ -282,4 +503,25 @@ impl ControlService for RuntimeControlService {
        };
        Ok(Response::new(Box::pin(output) as Self::WatchEventsStream))
    }
async fn snoop_idm(
&self,
request: Request<SnoopIdmRequest>,
) -> Result<Response<Self::SnoopIdmStream>, Status> {
let _ = request.into_inner();
let mut messages = self.idm.snoop();
let glt = self.glt.clone();
let output = try_stream! {
while let Ok(event) = messages.recv().await {
let Some(from_uuid) = glt.lookup_uuid_by_domid(event.from).await else {
continue;
};
let Some(to_uuid) = glt.lookup_uuid_by_domid(event.to).await else {
continue;
};
yield SnoopIdmReply { from: from_uuid.to_string(), to: to_uuid.to_string(), packet: Some(event.packet) };
}
};
Ok(Response::new(Box::pin(output) as Self::SnoopIdmStream))
}
}


@@ -6,10 +6,10 @@ use std::{

use anyhow::Result;
use krata::{
    idm::{internal::event::Event as EventType, internal::Event},
    v1::common::{GuestExitInfo, GuestState, GuestStatus},
};
use log::{error, warn};
use tokio::{
    select,
    sync::{

@@ -21,15 +21,12 @@ use tokio::{
};

use uuid::Uuid;

use crate::{db::GuestStore, idm::DaemonIdmHandle};

pub type DaemonEvent = krata::v1::control::watch_events_reply::Event;

const EVENT_CHANNEL_QUEUE_LEN: usize = 1000;
const IDM_EVENT_CHANNEL_QUEUE_LEN: usize = 1000;

#[derive(Clone)]
pub struct DaemonEventContext {
@@ -52,9 +49,9 @@ pub struct DaemonEventGenerator {
    guest_reconciler_notify: Sender<Uuid>,
    feed: broadcast::Receiver<DaemonEvent>,
    idm: DaemonIdmHandle,
    idms: HashMap<u32, (Uuid, JoinHandle<()>)>,
    idm_sender: Sender<(u32, Event)>,
    idm_receiver: Receiver<(u32, Event)>,
    _event_sender: broadcast::Sender<DaemonEvent>,
}

@@ -65,7 +62,7 @@ impl DaemonEventGenerator {
        idm: DaemonIdmHandle,
    ) -> Result<(DaemonEventContext, DaemonEventGenerator)> {
        let (sender, _) = broadcast::channel(EVENT_CHANNEL_QUEUE_LEN);
        let (idm_sender, idm_receiver) = channel(IDM_EVENT_CHANNEL_QUEUE_LEN);
        let generator = DaemonEventGenerator {
            guests,
            guest_reconciler_notify,
@@ -81,46 +78,55 @@ impl DaemonEventGenerator {
    }

    async fn handle_feed_event(&mut self, event: &DaemonEvent) -> Result<()> {
        let DaemonEvent::GuestChanged(changed) = event;
        let Some(ref guest) = changed.guest else {
            return Ok(());
        };

        let Some(ref state) = guest.state else {
            return Ok(());
        };

        let status = state.status();
        let id = Uuid::from_str(&guest.id)?;
        let domid = state.domid;
        match status {
            GuestStatus::Started => {
                if let Entry::Vacant(e) = self.idms.entry(domid) {
                    let client = self.idm.client_by_domid(domid).await?;
                    let mut receiver = client.subscribe().await?;
                    let sender = self.idm_sender.clone();
                    let task = tokio::task::spawn(async move {
                        loop {
                            let Ok(event) = receiver.recv().await else {
                                break;
                            };

                            if let Err(error) = sender.send((domid, event)).await {
                                warn!("unable to deliver idm event: {}", error);
                            }
                        }
                    });
                    e.insert((id, task));
                }
            }

            GuestStatus::Destroyed => {
                if let Some((_, handle)) = self.idms.remove(&domid) {
                    handle.abort();
                }
            }

            _ => {}
        }

        Ok(())
    }

    async fn handle_idm_event(&mut self, id: Uuid, event: Event) -> Result<()> {
        match event.event {
            Some(EventType::Exit(exit)) => self.handle_exit_code(id, exit.code).await,
            None => Ok(()),
        }
    }
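Each guest marked Started gets a forwarder task that tags its IDM events with the guest's domid and pushes them into the shared `idm_sender` channel, so the generator consumes a single fan-in stream. A hypothetical std-thread sketch of the same fan-in shape (the names, the `String` event type, and the thread-based forwarder are stand-ins, not the krata API):

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Forward a per-guest stream of events into one shared channel, tagged with
// the guest's domid, mirroring the per-domain tasks spawned above.
fn spawn_forwarder(domid: u32, events: Vec<String>, sink: Sender<(u32, String)>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for event in events {
            // A send error means the consumer has shut down; stop forwarding.
            if sink.send((domid, event)).is_err() {
                break;
            }
        }
    })
}

// Drain the shared channel until every forwarder has hung up its sender.
fn collect_events(receiver: Receiver<(u32, String)>) -> Vec<(u32, String)> {
    receiver.into_iter().collect()
}
```

Dropping the original sender after spawning the forwarders is what lets the drain terminate, much as the generator's receiver yields `None` once all senders are gone.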
    async fn handle_exit_code(&mut self, id: Uuid, code: i32) -> Result<()> {

@@ -130,6 +136,7 @@ impl DaemonEventGenerator {
            network: guest.state.clone().unwrap_or_default().network,
            exit_info: Some(GuestExitInfo { code }),
            error_info: None,
            host: guest.state.clone().map(|x| x.host).unwrap_or_default(),
            domid: guest.state.clone().map(|x| x.domid).unwrap_or(u32::MAX),
        });

@@ -142,9 +149,9 @@ impl DaemonEventGenerator {
    async fn evaluate(&mut self) -> Result<()> {
        select! {
            x = self.idm_receiver.recv() => match x {
                Some((domid, event)) => {
                    if let Some((id, _)) = self.idms.get(&domid) {
                        self.handle_idm_event(*id, event).await?;
                    }
                    Ok(())
                },

@@ -159,7 +166,7 @@ impl DaemonEventGenerator {
                Err(error) => {
                    Err(error.into())
                }
            },
        }
    }

crates/daemon/src/glt.rs Normal file (69 lines)

@@ -0,0 +1,69 @@
use std::{collections::HashMap, sync::Arc};
use tokio::sync::RwLock;
use uuid::Uuid;
struct GuestLookupTableState {
domid_to_uuid: HashMap<u32, Uuid>,
uuid_to_domid: HashMap<Uuid, u32>,
}
impl GuestLookupTableState {
pub fn new(host_uuid: Uuid) -> Self {
let mut domid_to_uuid = HashMap::new();
let mut uuid_to_domid = HashMap::new();
domid_to_uuid.insert(0, host_uuid);
uuid_to_domid.insert(host_uuid, 0);
GuestLookupTableState {
domid_to_uuid,
uuid_to_domid,
}
}
}
#[derive(Clone)]
pub struct GuestLookupTable {
host_domid: u32,
host_uuid: Uuid,
state: Arc<RwLock<GuestLookupTableState>>,
}
impl GuestLookupTable {
pub fn new(host_domid: u32, host_uuid: Uuid) -> Self {
GuestLookupTable {
host_domid,
host_uuid,
state: Arc::new(RwLock::new(GuestLookupTableState::new(host_uuid))),
}
}
pub fn host_uuid(&self) -> Uuid {
self.host_uuid
}
pub fn host_domid(&self) -> u32 {
self.host_domid
}
pub async fn lookup_uuid_by_domid(&self, domid: u32) -> Option<Uuid> {
let state = self.state.read().await;
state.domid_to_uuid.get(&domid).cloned()
}
pub async fn lookup_domid_by_uuid(&self, uuid: &Uuid) -> Option<u32> {
let state = self.state.read().await;
state.uuid_to_domid.get(uuid).cloned()
}
pub async fn associate(&self, uuid: Uuid, domid: u32) {
let mut state = self.state.write().await;
state.uuid_to_domid.insert(uuid, domid);
state.domid_to_uuid.insert(domid, uuid);
}
pub async fn remove(&self, uuid: Uuid, domid: u32) {
let mut state = self.state.write().await;
state.uuid_to_domid.remove(&uuid);
state.domid_to_uuid.remove(&domid);
}
}
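`GuestLookupTable` keeps the two maps in sync so either key resolves the other, with domain 0 pre-associated with the host. A synchronous sketch of the same invariant (`u128` stands in for `uuid::Uuid`, and the `Arc<RwLock<...>>` wrapper is omitted; these are assumptions for illustration, not the krata types):

```rust
use std::collections::HashMap;

// Bidirectional domid <-> uuid map; every associate/remove touches both maps
// so a lookup in either direction stays consistent.
struct LookupTable {
    domid_to_uuid: HashMap<u32, u128>,
    uuid_to_domid: HashMap<u128, u32>,
}

impl LookupTable {
    fn new(host_uuid: u128) -> Self {
        let mut table = LookupTable {
            domid_to_uuid: HashMap::new(),
            uuid_to_domid: HashMap::new(),
        };
        // Domain 0 is always the host.
        table.associate(host_uuid, 0);
        table
    }

    fn associate(&mut self, uuid: u128, domid: u32) {
        self.uuid_to_domid.insert(uuid, domid);
        self.domid_to_uuid.insert(domid, uuid);
    }

    fn remove(&mut self, uuid: u128, domid: u32) {
        self.uuid_to_domid.remove(&uuid);
        self.domid_to_uuid.remove(&domid);
    }

    fn lookup_uuid_by_domid(&self, domid: u32) -> Option<u128> {
        self.domid_to_uuid.get(&domid).copied()
    }

    fn lookup_domid_by_uuid(&self, uuid: u128) -> Option<u32> {
        self.uuid_to_domid.get(&uuid).copied()
    }
}
```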


@@ -1,53 +1,58 @@
use std::{
    collections::{hash_map::Entry, HashMap},
    sync::Arc,
};

use anyhow::{anyhow, Result};
use bytes::{Buf, BytesMut};
use krata::idm::{
    client::{IdmBackend, IdmInternalClient},
    internal::INTERNAL_IDM_CHANNEL,
    transport::IdmTransportPacket,
};
use kratart::channel::ChannelService;
use log::{error, warn};
use prost::Message;
use tokio::{
    select,
    sync::{
        broadcast,
        mpsc::{channel, Receiver, Sender},
        Mutex,
    },
    task::JoinHandle,
};
use uuid::Uuid;

use crate::glt::GuestLookupTable;

type BackendFeedMap = Arc<Mutex<HashMap<u32, Sender<IdmTransportPacket>>>>;
type ClientMap = Arc<Mutex<HashMap<u32, IdmInternalClient>>>;
#[derive(Clone)]
pub struct DaemonIdmHandle {
    glt: GuestLookupTable,
    clients: ClientMap,
    feeds: BackendFeedMap,
    tx_sender: Sender<(u32, IdmTransportPacket)>,
    task: Arc<JoinHandle<()>>,
    snoop_sender: broadcast::Sender<DaemonIdmSnoopPacket>,
}

impl DaemonIdmHandle {
    pub fn snoop(&self) -> broadcast::Receiver<DaemonIdmSnoopPacket> {
        self.snoop_sender.subscribe()
    }

    pub async fn client(&self, uuid: Uuid) -> Result<IdmInternalClient> {
        let Some(domid) = self.glt.lookup_domid_by_uuid(&uuid).await else {
            return Err(anyhow!("unable to find domain {}", uuid));
        };
        self.client_by_domid(domid).await
    }

    pub async fn client_by_domid(&self, domid: u32) -> Result<IdmInternalClient> {
        client_or_create(domid, &self.tx_sender, &self.clients, &self.feeds).await
    }
}
@@ -59,70 +64,141 @@ impl Drop for DaemonIdmHandle {
    }
}
#[derive(Clone)]
pub struct DaemonIdmSnoopPacket {
pub from: u32,
pub to: u32,
pub packet: IdmTransportPacket,
}
pub struct DaemonIdm {
    glt: GuestLookupTable,
    clients: ClientMap,
    feeds: BackendFeedMap,
    tx_sender: Sender<(u32, IdmTransportPacket)>,
    tx_raw_sender: Sender<(u32, Vec<u8>)>,
    tx_receiver: Receiver<(u32, IdmTransportPacket)>,
    rx_receiver: Receiver<(u32, Option<Vec<u8>>)>,
    snoop_sender: broadcast::Sender<DaemonIdmSnoopPacket>,
    task: JoinHandle<()>,
}

impl DaemonIdm {
    pub async fn new(glt: GuestLookupTable) -> Result<DaemonIdm> {
        let (service, tx_raw_sender, rx_receiver) =
            ChannelService::new("krata-channel".to_string(), None).await?;
        let (tx_sender, tx_receiver) = channel(100);
        let (snoop_sender, _) = broadcast::channel(100);
        let task = service.launch().await?;
        let clients = Arc::new(Mutex::new(HashMap::new()));
        let feeds = Arc::new(Mutex::new(HashMap::new()));
        Ok(DaemonIdm {
            glt,
            rx_receiver,
            tx_receiver,
            tx_sender,
            tx_raw_sender,
            snoop_sender,
            task,
            clients,
            feeds,
        })
    }

    pub async fn launch(mut self) -> Result<DaemonIdmHandle> {
        let glt = self.glt.clone();
        let clients = self.clients.clone();
        let feeds = self.feeds.clone();
        let tx_sender = self.tx_sender.clone();
        let snoop_sender = self.snoop_sender.clone();
        let task = tokio::task::spawn(async move {
            let mut buffers: HashMap<u32, BytesMut> = HashMap::new();

            while let Err(error) = self.process(&mut buffers).await {
                error!("failed to process idm: {}", error);
            }
        });
        Ok(DaemonIdmHandle {
            glt,
            clients,
            feeds,
            tx_sender,
            snoop_sender,
            task: Arc::new(task),
        })
    }
    async fn process(&mut self, buffers: &mut HashMap<u32, BytesMut>) -> Result<()> {
        loop {
            select! {
                x = self.rx_receiver.recv() => match x {
                    Some((domid, data)) => {
                        if let Some(data) = data {
                            let buffer = buffers.entry(domid).or_insert_with_key(|_| BytesMut::new());
                            buffer.extend_from_slice(&data);
                            if buffer.len() < 6 {
                                continue;
                            }

                            if buffer[0] != 0xff || buffer[1] != 0xff {
                                buffer.clear();
                                continue;
                            }

                            let size = (buffer[2] as u32 | (buffer[3] as u32) << 8 | (buffer[4] as u32) << 16 | (buffer[5] as u32) << 24) as usize;
                            let needed = size + 6;
                            if buffer.len() < needed {
                                continue;
                            }
                            let mut packet = buffer.split_to(needed);
                            packet.advance(6);
                            match IdmTransportPacket::decode(packet) {
                                Ok(packet) => {
                                    let _ = client_or_create(domid, &self.tx_sender, &self.clients, &self.feeds).await?;
                                    let guard = self.feeds.lock().await;
                                    if let Some(feed) = guard.get(&domid) {
                                        let _ = feed.try_send(packet.clone());
                                    }
                                    let _ = self.snoop_sender.send(DaemonIdmSnoopPacket { from: domid, to: 0, packet });
                                }
                                Err(packet) => {
                                    warn!("received invalid packet from domain {}: {}", domid, packet);
                                }
                            }
                        } else {
                            let mut clients = self.clients.lock().await;
                            let mut feeds = self.feeds.lock().await;
                            clients.remove(&domid);
                            feeds.remove(&domid);
                        }
                    },
                    None => {
                        break;
                    }
                },
                x = self.tx_receiver.recv() => match x {
                    Some((domid, packet)) => {
                        let data = packet.encode_to_vec();
                        let mut buffer = vec![0u8; 6];
                        let length = data.len() as u32;
                        buffer[0] = 0xff;
                        buffer[1] = 0xff;
                        buffer[2] = length as u8;
                        buffer[3] = (length >> 8) as u8;
                        buffer[4] = (length >> 16) as u8;
                        buffer[5] = (length >> 24) as u8;
                        buffer.extend_from_slice(&data);
                        self.tx_raw_sender.send((domid, buffer)).await?;
                        let _ = self.snoop_sender.send(DaemonIdmSnoopPacket { from: 0, to: domid, packet });
                    },
                    None => {
                        break;
                    }
                }
            }
        }
        Ok(())
    }
@@ -133,3 +209,54 @@ impl Drop for DaemonIdm {
        self.task.abort();
    }
}
async fn client_or_create(
domid: u32,
tx_sender: &Sender<(u32, IdmTransportPacket)>,
clients: &ClientMap,
feeds: &BackendFeedMap,
) -> Result<IdmInternalClient> {
let mut clients = clients.lock().await;
let mut feeds = feeds.lock().await;
match clients.entry(domid) {
Entry::Occupied(entry) => Ok(entry.get().clone()),
Entry::Vacant(entry) => {
let (rx_sender, rx_receiver) = channel(100);
feeds.insert(domid, rx_sender);
let backend = IdmDaemonBackend {
domid,
rx_receiver,
tx_sender: tx_sender.clone(),
};
let client = IdmInternalClient::new(
INTERNAL_IDM_CHANNEL,
Box::new(backend) as Box<dyn IdmBackend>,
)
.await?;
entry.insert(client.clone());
Ok(client)
}
}
}
pub struct IdmDaemonBackend {
domid: u32,
rx_receiver: Receiver<IdmTransportPacket>,
tx_sender: Sender<(u32, IdmTransportPacket)>,
}
#[async_trait::async_trait]
impl IdmBackend for IdmDaemonBackend {
async fn recv(&mut self) -> Result<IdmTransportPacket> {
if let Some(packet) = self.rx_receiver.recv().await {
Ok(packet)
} else {
Err(anyhow!("idm receive channel closed"))
}
}
async fn send(&mut self, packet: IdmTransportPacket) -> Result<()> {
self.tx_sender.send((self.domid, packet)).await?;
Ok(())
}
}
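The process loop above frames each `IdmTransportPacket` as a two-byte `0xff 0xff` magic followed by a four-byte little-endian payload length. A self-contained sketch of just that framing (std only; the payload is reduced to raw bytes, and the high length bytes are extracted with right shifts):

```rust
// Encode a payload into a 6-byte-header frame: 0xff 0xff magic, then the
// little-endian u32 payload length, then the payload itself.
fn encode_frame(payload: &[u8]) -> Vec<u8> {
    let mut buffer = vec![0u8; 6];
    let length = payload.len() as u32;
    buffer[0] = 0xff;
    buffer[1] = 0xff;
    buffer[2] = length as u8;
    buffer[3] = (length >> 8) as u8;
    buffer[4] = (length >> 16) as u8;
    buffer[5] = (length >> 24) as u8;
    buffer.extend_from_slice(payload);
    buffer
}

// Decode one frame, returning the payload and the number of bytes consumed,
// or None if the buffer does not yet hold a complete valid frame.
fn decode_frame(buffer: &[u8]) -> Option<(Vec<u8>, usize)> {
    if buffer.len() < 6 || buffer[0] != 0xff || buffer[1] != 0xff {
        return None;
    }
    let size = u32::from_le_bytes([buffer[2], buffer[3], buffer[4], buffer[5]]) as usize;
    let needed = size + 6;
    if buffer.len() < needed {
        return None;
    }
    Some((buffer[6..needed].to_vec(), needed))
}
```

Returning `None` for a short buffer is what lets the daemon keep accumulating bytes in its per-domain `BytesMut` until a full frame arrives.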


@@ -1,16 +1,19 @@
use std::{net::SocketAddr, path::PathBuf, str::FromStr};

use anyhow::{anyhow, Result};
use console::{DaemonConsole, DaemonConsoleHandle};
use control::DaemonControlService;
use db::GuestStore;
use event::{DaemonEventContext, DaemonEventGenerator};
use glt::GuestLookupTable;
use idm::{DaemonIdm, DaemonIdmHandle};
use krata::{dial::ControlDialAddress, v1::control::control_service_server::ControlServiceServer};
use krataoci::{packer::service::OciPackerService, registry::OciPlatform};
use kratart::Runtime;
use log::info;
use reconcile::guest::GuestReconciler;
use tokio::{
    fs,
    net::UnixListener,
    sync::mpsc::{channel, Sender},
    task::JoinHandle,

@@ -19,67 +22,113 @@ use tokio_stream::wrappers::UnixListenerStream;
use tonic::transport::{Identity, Server, ServerTlsConfig};
use uuid::Uuid;

pub mod command;
pub mod console;
pub mod control;
pub mod db;
pub mod event;
pub mod glt;
pub mod idm;
pub mod metrics;
pub mod oci;
pub mod reconcile;

pub struct Daemon {
    store: String,
    glt: GuestLookupTable,
    guests: GuestStore,
    events: DaemonEventContext,
    guest_reconciler_task: JoinHandle<()>,
    guest_reconciler_notify: Sender<Uuid>,
    generator_task: JoinHandle<()>,
    idm: DaemonIdmHandle,
    console: DaemonConsoleHandle,
    packer: OciPackerService,
}

const GUEST_RECONCILER_QUEUE_LEN: usize = 1000;

impl Daemon {
    pub async fn new(store: String) -> Result<Self> {
let mut image_cache_dir = PathBuf::from(store.clone());
image_cache_dir.push("cache");
image_cache_dir.push("image");
fs::create_dir_all(&image_cache_dir).await?;
let mut host_uuid_path = PathBuf::from(store.clone());
host_uuid_path.push("host.uuid");
let host_uuid = if host_uuid_path.is_file() {
let content = fs::read_to_string(&host_uuid_path).await?;
Uuid::from_str(content.trim()).ok()
} else {
None
};
let host_uuid = if let Some(host_uuid) = host_uuid {
host_uuid
} else {
let generated = Uuid::new_v4();
let mut string = generated.to_string();
string.push('\n');
fs::write(&host_uuid_path, string).await?;
generated
};
let initrd_path = detect_guest_file(&store, "initrd")?;
let kernel_path = detect_guest_file(&store, "kernel")?;
let packer = OciPackerService::new(None, &image_cache_dir, OciPlatform::current()).await?;
let runtime = Runtime::new().await?;
let glt = GuestLookupTable::new(0, host_uuid);
        let guests_db_path = format!("{}/guests.db", store);
        let guests = GuestStore::open(&PathBuf::from(guests_db_path))?;
        let (guest_reconciler_notify, guest_reconciler_receiver) =
            channel::<Uuid>(GUEST_RECONCILER_QUEUE_LEN);
        let idm = DaemonIdm::new(glt.clone()).await?;
        let idm = idm.launch().await?;
        let console = DaemonConsole::new(glt.clone()).await?;
        let console = console.launch().await?;
        let (events, generator) =
            DaemonEventGenerator::new(guests.clone(), guest_reconciler_notify.clone(), idm.clone())
                .await?;
        let runtime_for_reconciler = runtime.dupe().await?;
        let guest_reconciler = GuestReconciler::new(
            glt.clone(),
            guests.clone(),
            events.clone(),
            runtime_for_reconciler,
            packer.clone(),
            guest_reconciler_notify.clone(),
            kernel_path,
            initrd_path,
        )?;

        let guest_reconciler_task = guest_reconciler.launch(guest_reconciler_receiver).await?;
        let generator_task = generator.launch().await?;
        Ok(Self {
            store,
            glt,
            guests,
            events,
            guest_reconciler_task,
            guest_reconciler_notify,
            generator_task,
            idm,
            console,
            packer,
        })
    }

    pub async fn listen(&mut self, addr: ControlDialAddress) -> Result<()> {
        let control_service = DaemonControlService::new(
            self.glt.clone(),
            self.events.clone(),
            self.console.clone(),
            self.idm.clone(),
            self.guests.clone(),
            self.guest_reconciler_notify.clone(),
            self.packer.clone(),
        );

        let mut server = Server::builder();
@@ -105,7 +154,7 @@ impl Daemon {
            ControlDialAddress::UnixSocket { path } => {
                let path = PathBuf::from(path);
                if path.exists() {
                    fs::remove_file(&path).await?;
                }
                let listener = UnixListener::bind(path)?;
                let stream = UnixListenerStream::new(listener);

@@ -136,3 +185,16 @@ impl Drop for Daemon {
        self.generator_task.abort();
    }
}
fn detect_guest_file(store: &str, name: &str) -> Result<PathBuf> {
let mut path = PathBuf::from(format!("{}/guest/{}", store, name));
if path.is_file() {
return Ok(path);
}
path = PathBuf::from(format!("/usr/share/krata/guest/{}", name));
if path.is_file() {
return Ok(path);
}
Err(anyhow!("unable to find required guest file: {}", name))
}
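The `detect_guest_file` helper above checks the configured store directory first and then a system-wide share directory. The same two-location fallback can be sketched standalone (the directories here are illustrative stand-ins, not the real krata layout):

```rust
use std::fs;
use std::path::PathBuf;

// Hypothetical stand-in for the fallback lookup in detect_guest_file:
// prefer a file under the primary (store) directory, then try a shared
// system directory, and report None if neither exists.
fn detect_file(primary_dir: &str, fallback_dir: &str, name: &str) -> Option<PathBuf> {
    let primary = PathBuf::from(format!("{}/{}", primary_dir, name));
    if primary.is_file() {
        return Some(primary);
    }
    let fallback = PathBuf::from(format!("{}/{}", fallback_dir, name));
    if fallback.is_file() {
        return Some(fallback);
    }
    None
}

fn main() {
    let dir = std::env::temp_dir().join("krata-detect-demo");
    fs::create_dir_all(&dir).unwrap();
    fs::write(dir.join("kernel"), b"stub").unwrap();
    // Found via the fallback directory because the primary has no such file.
    assert!(detect_file("/nonexistent-store", dir.to_str().unwrap(), "kernel").is_some());
    // Missing in both locations.
    assert!(detect_file("/nonexistent-store", dir.to_str().unwrap(), "initrd").is_none());
    println!("ok");
}
```

The real function returns `Result<PathBuf>` with an `anyhow` error instead of `Option`, but the lookup order is the same.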

@@ -0,0 +1,27 @@
use krata::{
idm::internal::{MetricFormat, MetricNode},
v1::common::{GuestMetricFormat, GuestMetricNode},
};
fn idm_metric_format_to_api(format: MetricFormat) -> GuestMetricFormat {
match format {
MetricFormat::Unknown => GuestMetricFormat::Unknown,
MetricFormat::Bytes => GuestMetricFormat::Bytes,
MetricFormat::Integer => GuestMetricFormat::Integer,
MetricFormat::DurationSeconds => GuestMetricFormat::DurationSeconds,
}
}
pub fn idm_metric_to_api(node: MetricNode) -> GuestMetricNode {
let format = node.format();
GuestMetricNode {
name: node.name,
value: node.value,
format: idm_metric_format_to_api(format).into(),
children: node
.children
.into_iter()
.map(idm_metric_to_api)
.collect::<Vec<_>>(),
}
}
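`idm_metric_to_api` above converts a metric tree by recursing through `children`, mapping each node with the same function. That recursive-mapping pattern, sketched with hypothetical stand-in types (the real `MetricNode`/`GuestMetricNode` are protobuf types with more fields):

```rust
// Stand-in source and destination node types for illustration only.
#[derive(Debug, PartialEq)]
struct IdmNode {
    name: String,
    children: Vec<IdmNode>,
}

#[derive(Debug, PartialEq)]
struct ApiNode {
    name: String,
    children: Vec<ApiNode>,
}

// Each child is converted by the same function, so arbitrarily deep
// metric trees are handled with no extra bookkeeping.
fn to_api(node: IdmNode) -> ApiNode {
    ApiNode {
        name: node.name,
        children: node.children.into_iter().map(to_api).collect(),
    }
}

fn main() {
    let root = IdmNode {
        name: "cpu".into(),
        children: vec![IdmNode {
            name: "user".into(),
            children: vec![],
        }],
    };
    let api = to_api(root);
    assert_eq!(api.name, "cpu");
    assert_eq!(api.children[0].name, "user");
    println!("ok");
}
```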

crates/daemon/src/oci.rs (new file, 79 lines)
@@ -0,0 +1,79 @@
use krata::v1::control::{
image_progress_indication::Indication, ImageProgress, ImageProgressIndication,
ImageProgressIndicationBar, ImageProgressIndicationCompleted, ImageProgressIndicationHidden,
ImageProgressIndicationSpinner, ImageProgressLayer, ImageProgressLayerPhase,
ImageProgressPhase,
};
use krataoci::progress::{
OciProgress, OciProgressIndication, OciProgressLayer, OciProgressLayerPhase, OciProgressPhase,
};
fn convert_oci_progress_indication(indication: OciProgressIndication) -> ImageProgressIndication {
ImageProgressIndication {
indication: Some(match indication {
OciProgressIndication::Hidden => Indication::Hidden(ImageProgressIndicationHidden {}),
OciProgressIndication::ProgressBar {
message,
current,
total,
bytes,
} => Indication::Bar(ImageProgressIndicationBar {
message: message.unwrap_or_default(),
current,
total,
is_bytes: bytes,
}),
OciProgressIndication::Spinner { message } => {
Indication::Spinner(ImageProgressIndicationSpinner {
message: message.unwrap_or_default(),
})
}
OciProgressIndication::Completed {
message,
total,
bytes,
} => Indication::Completed(ImageProgressIndicationCompleted {
message: message.unwrap_or_default(),
total: total.unwrap_or(0),
is_bytes: bytes,
}),
}),
}
}
fn convert_oci_layer_progress(layer: OciProgressLayer) -> ImageProgressLayer {
ImageProgressLayer {
id: layer.id,
phase: match layer.phase {
OciProgressLayerPhase::Waiting => ImageProgressLayerPhase::Waiting,
OciProgressLayerPhase::Downloading => ImageProgressLayerPhase::Downloading,
OciProgressLayerPhase::Downloaded => ImageProgressLayerPhase::Downloaded,
OciProgressLayerPhase::Extracting => ImageProgressLayerPhase::Extracting,
OciProgressLayerPhase::Extracted => ImageProgressLayerPhase::Extracted,
}
.into(),
indication: Some(convert_oci_progress_indication(layer.indication)),
}
}
pub fn convert_oci_progress(oci: OciProgress) -> ImageProgress {
ImageProgress {
phase: match oci.phase {
OciProgressPhase::Started => ImageProgressPhase::Started,
OciProgressPhase::Resolving => ImageProgressPhase::Resolving,
OciProgressPhase::Resolved => ImageProgressPhase::Resolved,
OciProgressPhase::ConfigDownload => ImageProgressPhase::ConfigDownload,
OciProgressPhase::LayerDownload => ImageProgressPhase::LayerDownload,
OciProgressPhase::Assemble => ImageProgressPhase::Assemble,
OciProgressPhase::Pack => ImageProgressPhase::Pack,
OciProgressPhase::Complete => ImageProgressPhase::Complete,
}
.into(),
layers: oci
.layers
.into_values()
.map(convert_oci_layer_progress)
.collect::<Vec<_>>(),
indication: Some(convert_oci_progress_indication(oci.indication)),
}
}

@@ -1,18 +1,17 @@
 use std::{
     collections::{hash_map::Entry, HashMap},
+    path::PathBuf,
     sync::Arc,
     time::Duration,
 };

-use anyhow::{anyhow, Result};
+use anyhow::Result;
 use krata::v1::{
-    common::{
-        guest_image_spec::Image, Guest, GuestErrorInfo, GuestExitInfo, GuestNetworkState,
-        GuestState, GuestStatus,
-    },
+    common::{Guest, GuestErrorInfo, GuestExitInfo, GuestNetworkState, GuestState, GuestStatus},
     control::GuestChangedEvent,
 };
-use kratart::{launch::GuestLaunchRequest, GuestInfo, Runtime};
+use krataoci::packer::service::OciPackerService;
+use kratart::{GuestInfo, Runtime};
 use log::{error, info, trace, warn};
 use tokio::{
     select,
@@ -28,8 +27,13 @@ use uuid::Uuid;
 use crate::{
     db::GuestStore,
     event::{DaemonEvent, DaemonEventContext},
+    glt::GuestLookupTable,
 };

+use self::start::GuestStarter;
+
+mod start;
+
 const PARALLEL_LIMIT: u32 = 5;
 #[derive(Debug)]
@@ -51,25 +55,38 @@ impl Drop for GuestReconcilerEntry {
 #[derive(Clone)]
 pub struct GuestReconciler {
+    glt: GuestLookupTable,
     guests: GuestStore,
     events: DaemonEventContext,
     runtime: Runtime,
+    packer: OciPackerService,
+    kernel_path: PathBuf,
+    initrd_path: PathBuf,
     tasks: Arc<Mutex<HashMap<Uuid, GuestReconcilerEntry>>>,
     guest_reconciler_notify: Sender<Uuid>,
     reconcile_lock: Arc<RwLock<()>>,
 }

 impl GuestReconciler {
+    #[allow(clippy::too_many_arguments)]
     pub fn new(
+        glt: GuestLookupTable,
         guests: GuestStore,
         events: DaemonEventContext,
         runtime: Runtime,
+        packer: OciPackerService,
         guest_reconciler_notify: Sender<Uuid>,
+        kernel_path: PathBuf,
+        initrd_path: PathBuf,
     ) -> Result<Self> {
         Ok(Self {
+            glt,
             guests,
             events,
             runtime,
+            packer,
+            kernel_path,
+            initrd_path,
             tasks: Arc::new(Mutex::new(HashMap::new())),
             guest_reconciler_notify,
             reconcile_lock: Arc::new(RwLock::with_max_readers((), PARALLEL_LIMIT)),
@@ -118,6 +135,23 @@ impl GuestReconciler {
         trace!("reconciling runtime");
         let runtime_guests = self.runtime.list().await?;
         let stored_guests = self.guests.list().await?;
+
+        let non_existent_guests = runtime_guests
+            .iter()
+            .filter(|x| !stored_guests.iter().any(|g| *g.0 == x.uuid))
+            .collect::<Vec<_>>();
+
+        for guest in non_existent_guests {
+            warn!("destroying unknown runtime guest {}", guest.uuid);
+            if let Err(error) = self.runtime.destroy(guest.uuid).await {
+                error!(
+                    "failed to destroy unknown runtime guest {}: {}",
+                    guest.uuid, error
+                );
+            }
+            self.guests.remove(guest.uuid).await?;
+        }
+
         for (uuid, mut stored_guest) in stored_guests {
             let previous_guest = stored_guest.clone();
             let runtime_guest = runtime_guests.iter().find(|x| x.uuid == uuid);
@@ -131,6 +165,7 @@ impl GuestReconciler {
                 }
                 Some(runtime) => {
+                    self.glt.associate(uuid, runtime.domid).await;
                     let mut state = stored_guest.state.as_mut().cloned().unwrap_or_default();
                     if let Some(code) = runtime.state.exit_code {
                         state.status = GuestStatus::Exited.into();
@@ -219,52 +254,14 @@ impl GuestReconciler {
     }

     async fn start(&self, uuid: Uuid, guest: &mut Guest) -> Result<GuestReconcilerResult> {
-        let Some(ref spec) = guest.spec else {
-            return Err(anyhow!("guest spec not specified"));
-        };
-
-        let Some(ref image) = spec.image else {
-            return Err(anyhow!("image spec not provided"));
-        };
-        let oci = match image.image {
-            Some(Image::Oci(ref oci)) => oci,
-            None => {
-                return Err(anyhow!("oci spec not specified"));
-            }
-        };
-        let task = spec.task.as_ref().cloned().unwrap_or_default();
-        let info = self
-            .runtime
-            .launch(GuestLaunchRequest {
-                uuid: Some(uuid),
-                name: if spec.name.is_empty() {
-                    None
-                } else {
-                    Some(&spec.name)
-                },
-                image: &oci.image,
-                vcpus: spec.vcpus,
-                mem: spec.mem,
-                env: task
-                    .environment
-                    .iter()
-                    .map(|x| (x.key.clone(), x.value.clone()))
-                    .collect::<HashMap<_, _>>(),
-                run: empty_vec_optional(task.command.clone()),
-                debug: false,
-            })
-            .await?;
-        info!("started guest {}", uuid);
-        guest.state = Some(GuestState {
-            status: GuestStatus::Started.into(),
-            network: Some(guestinfo_to_networkstate(&info)),
-            exit_info: None,
-            error_info: None,
-            domid: info.domid,
-        });
-        Ok(GuestReconcilerResult::Changed { rerun: false })
+        let starter = GuestStarter {
+            kernel_path: &self.kernel_path,
+            initrd_path: &self.initrd_path,
+            packer: &self.packer,
+            glt: &self.glt,
+            runtime: &self.runtime,
+        };
+        starter.start(uuid, guest).await
     }

     async fn exited(&self, guest: &mut Guest) -> Result<GuestReconcilerResult> {
@@ -281,13 +278,20 @@ impl GuestReconciler {
             trace!("failed to destroy runtime guest {}: {}", uuid, error);
         }
+
+        let domid = guest.state.as_ref().map(|x| x.domid);
+
+        if let Some(domid) = domid {
+            self.glt.remove(uuid, domid).await;
+        }
+
         info!("destroyed guest {}", uuid);
         guest.state = Some(GuestState {
             status: GuestStatus::Destroyed.into(),
             network: None,
             exit_info: None,
             error_info: None,
-            domid: guest.state.as_ref().map(|x| x.domid).unwrap_or(u32::MAX),
+            host: self.glt.host_uuid().to_string(),
+            domid: domid.unwrap_or(u32::MAX),
         });
         Ok(GuestReconcilerResult::Changed { rerun: false })
     }
@@ -332,15 +336,7 @@ impl GuestReconciler {
         }
     }
 }

-fn empty_vec_optional<T>(value: Vec<T>) -> Option<Vec<T>> {
-    if value.is_empty() {
-        None
-    } else {
-        Some(value)
-    }
-}
-
-fn guestinfo_to_networkstate(info: &GuestInfo) -> GuestNetworkState {
+pub fn guestinfo_to_networkstate(info: &GuestInfo) -> GuestNetworkState {
     GuestNetworkState {
         guest_ipv4: info.guest_ipv4.map(|x| x.to_string()).unwrap_or_default(),
         guest_ipv6: info.guest_ipv6.map(|x| x.to_string()).unwrap_or_default(),

@@ -0,0 +1,182 @@
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Result};
use futures::StreamExt;
use krata::launchcfg::LaunchPackedFormat;
use krata::v1::common::GuestOciImageSpec;
use krata::v1::common::{guest_image_spec::Image, Guest, GuestState, GuestStatus, OciImageFormat};
use krataoci::packer::{service::OciPackerService, OciPackedFormat};
use kratart::{launch::GuestLaunchRequest, Runtime};
use log::info;
use tokio::fs::{self, File};
use tokio::io::AsyncReadExt;
use tokio_tar::Archive;
use uuid::Uuid;
use crate::{
glt::GuestLookupTable,
reconcile::guest::{guestinfo_to_networkstate, GuestReconcilerResult},
};
// if a kernel is >= 100MB, that's kinda scary.
const OCI_SPEC_TAR_FILE_MAX_SIZE: usize = 100 * 1024 * 1024;
pub struct GuestStarter<'a> {
pub kernel_path: &'a Path,
pub initrd_path: &'a Path,
pub packer: &'a OciPackerService,
pub glt: &'a GuestLookupTable,
pub runtime: &'a Runtime,
}
impl GuestStarter<'_> {
pub async fn oci_spec_tar_read_file(
&self,
file: &Path,
oci: &GuestOciImageSpec,
) -> Result<Vec<u8>> {
if oci.format() != OciImageFormat::Tar {
return Err(anyhow!(
"oci image spec for {} is required to be in tar format",
oci.digest
));
}
let image = self
.packer
.recall(&oci.digest, OciPackedFormat::Tar)
.await?;
let Some(image) = image else {
return Err(anyhow!("image {} was not found in tar format", oci.digest));
};
let mut archive = Archive::new(File::open(&image.path).await?);
let mut entries = archive.entries()?;
while let Some(entry) = entries.next().await {
let mut entry = entry?;
let path = entry.path()?;
if entry.header().size()? as usize > OCI_SPEC_TAR_FILE_MAX_SIZE {
return Err(anyhow!(
"file {} in image {} is larger than the size limit",
file.to_string_lossy(),
oci.digest
));
}
if path == file {
let mut buffer = Vec::new();
entry.read_to_end(&mut buffer).await?;
return Ok(buffer);
}
}
Err(anyhow!(
"unable to find file {} in image {}",
file.to_string_lossy(),
oci.digest
))
}
pub async fn start(&self, uuid: Uuid, guest: &mut Guest) -> Result<GuestReconcilerResult> {
let Some(ref spec) = guest.spec else {
return Err(anyhow!("guest spec not specified"));
};
let Some(ref image) = spec.image else {
return Err(anyhow!("image spec not provided"));
};
let oci = match image.image {
Some(Image::Oci(ref oci)) => oci,
None => {
return Err(anyhow!("oci spec not specified"));
}
};
let task = spec.task.as_ref().cloned().unwrap_or_default();
let image = self
.packer
.recall(
&oci.digest,
match oci.format() {
OciImageFormat::Unknown => OciPackedFormat::Squashfs,
OciImageFormat::Squashfs => OciPackedFormat::Squashfs,
OciImageFormat::Erofs => OciPackedFormat::Erofs,
OciImageFormat::Tar => {
return Err(anyhow!("tar image format is not supported for guests"));
}
},
)
.await?;
let Some(image) = image else {
return Err(anyhow!(
"image {} in the requested format did not exist",
oci.digest
));
};
let kernel = if let Some(ref spec) = spec.kernel {
let Some(Image::Oci(ref oci)) = spec.image else {
return Err(anyhow!("kernel image spec must be an oci image"));
};
self.oci_spec_tar_read_file(&PathBuf::from("kernel/image"), oci)
.await?
} else {
fs::read(&self.kernel_path).await?
};
let initrd = if let Some(ref spec) = spec.initrd {
let Some(Image::Oci(ref oci)) = spec.image else {
return Err(anyhow!("initrd image spec must be an oci image"));
};
self.oci_spec_tar_read_file(&PathBuf::from("krata/initrd"), oci)
.await?
} else {
fs::read(&self.initrd_path).await?
};
let info = self
.runtime
.launch(GuestLaunchRequest {
format: LaunchPackedFormat::Squashfs,
uuid: Some(uuid),
name: if spec.name.is_empty() {
None
} else {
Some(spec.name.clone())
},
image,
kernel,
initrd,
vcpus: spec.vcpus,
mem: spec.mem,
env: task
.environment
.iter()
.map(|x| (x.key.clone(), x.value.clone()))
.collect::<HashMap<_, _>>(),
run: empty_vec_optional(task.command.clone()),
debug: false,
})
.await?;
self.glt.associate(uuid, info.domid).await;
info!("started guest {}", uuid);
guest.state = Some(GuestState {
status: GuestStatus::Started.into(),
network: Some(guestinfo_to_networkstate(&info)),
exit_info: None,
error_info: None,
host: self.glt.host_uuid().to_string(),
domid: info.domid,
});
Ok(GuestReconcilerResult::Changed { rerun: false })
}
}
fn empty_vec_optional<T>(value: Vec<T>) -> Option<Vec<T>> {
if value.is_empty() {
None
} else {
Some(value)
}
}
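The `empty_vec_optional` helper above maps an empty command vector to `None`, so the launch request can distinguish "no command given" from an explicit command. A minimal standalone sketch of the same helper:

```rust
// Maps an empty Vec to None; non-empty Vecs pass through as Some.
fn empty_vec_optional<T>(value: Vec<T>) -> Option<Vec<T>> {
    if value.is_empty() {
        None
    } else {
        Some(value)
    }
}

fn main() {
    // An empty command list means "use the image's default entrypoint".
    assert_eq!(empty_vec_optional::<String>(vec![]), None);
    // A provided command is preserved intact.
    assert_eq!(
        empty_vec_optional(vec!["sh".to_string()]),
        Some(vec!["sh".to_string()])
    );
    println!("ok");
}
```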

@@ -14,8 +14,8 @@ cgroups-rs = { workspace = true }
 env_logger = { workspace = true }
 futures = { workspace = true }
 ipnetwork = { workspace = true }
-krata = { path = "../krata", version = "^0.0.7" }
-krata-xenstore = { path = "../xen/xenstore", version = "^0.0.7" }
+krata = { path = "../krata", version = "^0.0.10" }
+krata-xenstore = { path = "../xen/xenstore", version = "^0.0.10" }
 libc = { workspace = true }
 log = { workspace = true }
 nix = { workspace = true, features = ["ioctl", "process", "fs"] }
@@ -25,8 +25,8 @@ rtnetlink = { workspace = true }
 serde = { workspace = true }
 serde_json = { workspace = true }
 sys-mount = { workspace = true }
+sysinfo = { workspace = true }
 tokio = { workspace = true }
-walkdir = { workspace = true }

 [lib]
 name = "krataguest"

@@ -23,6 +23,8 @@ async fn main() -> Result<()> {
     if let Err(error) = guest.init().await {
         error!("failed to initialize guest: {}", error);
         death(127).await?;
+        return Ok(());
     }
+    death(1).await?;
     Ok(())
 }

@@ -1,26 +1,36 @@
 use crate::{
     childwait::{ChildEvent, ChildWait},
     death,
+    exec::GuestExecTask,
+    metrics::MetricsCollector,
 };
 use anyhow::Result;
 use cgroups_rs::Cgroup;
 use krata::idm::{
-    client::IdmClient,
-    protocol::{idm_event::Event, IdmEvent, IdmExitEvent, IdmPacket},
+    client::{IdmClientStreamResponseHandle, IdmInternalClient},
+    internal::{
+        event::Event as EventType, request::Request as RequestType,
+        response::Response as ResponseType, Event, ExecStreamResponseUpdate, ExitEvent,
+        MetricsResponse, PingResponse, Request, Response,
+    },
 };
-use log::error;
+use log::debug;
 use nix::unistd::Pid;
-use tokio::select;
+use tokio::{select, sync::broadcast};

 pub struct GuestBackground {
-    idm: IdmClient,
+    idm: IdmInternalClient,
     child: Pid,
     _cgroup: Cgroup,
     wait: ChildWait,
 }

 impl GuestBackground {
-    pub async fn new(idm: IdmClient, cgroup: Cgroup, child: Pid) -> Result<GuestBackground> {
+    pub async fn new(
+        idm: IdmInternalClient,
+        cgroup: Cgroup,
+        child: Pid,
+    ) -> Result<GuestBackground> {
         Ok(GuestBackground {
             idm,
             child,
@@ -30,16 +40,52 @@ impl GuestBackground {
     }

     pub async fn run(&mut self) -> Result<()> {
+        let mut event_subscription = self.idm.subscribe().await?;
+        let mut requests_subscription = self.idm.requests().await?;
+        let mut request_streams_subscription = self.idm.request_streams().await?;
         loop {
             select! {
-                x = self.idm.receiver.recv() => match x {
-                    Some(_packet) => {
-                    },
-                    None => {
-                        error!("idm packet channel closed");
-                        break;
-                    }
-                },
+                x = event_subscription.recv() => match x {
+                    Ok(_event) => {
+                    },
+                    Err(broadcast::error::RecvError::Closed) => {
+                        debug!("idm packet channel closed");
+                        break;
+                    },
+                    _ => {
+                        continue;
+                    }
+                },
+
+                x = requests_subscription.recv() => match x {
+                    Ok((id, request)) => {
+                        self.handle_idm_request(id, request).await?;
+                    },
+                    Err(broadcast::error::RecvError::Closed) => {
+                        debug!("idm packet channel closed");
+                        break;
+                    },
+                    _ => {
+                        continue;
+                    }
+                },
+
+                x = request_streams_subscription.recv() => match x {
+                    Ok(handle) => {
+                        self.handle_idm_stream_request(handle).await?;
+                    },
+                    Err(broadcast::error::RecvError::Closed) => {
+                        debug!("idm packet channel closed");
+                        break;
+                    },
+                    _ => {
+                        continue;
+                    }
+                },
@@ -54,14 +100,65 @@ impl GuestBackground {
         Ok(())
     }
async fn handle_idm_request(&mut self, id: u64, packet: Request) -> Result<()> {
match packet.request {
Some(RequestType::Ping(_)) => {
self.idm
.respond(
id,
Response {
response: Some(ResponseType::Ping(PingResponse {})),
},
)
.await?;
}
Some(RequestType::Metrics(_)) => {
let metrics = MetricsCollector::new()?;
let root = metrics.collect()?;
let response = Response {
response: Some(ResponseType::Metrics(MetricsResponse { root: Some(root) })),
};
self.idm.respond(id, response).await?;
}
_ => {}
}
Ok(())
}
async fn handle_idm_stream_request(
&mut self,
handle: IdmClientStreamResponseHandle<Request>,
) -> Result<()> {
if let Some(RequestType::ExecStream(_)) = &handle.initial.request {
tokio::task::spawn(async move {
let exec = GuestExecTask { handle };
if let Err(error) = exec.run().await {
let _ = exec
.handle
.respond(Response {
response: Some(ResponseType::ExecStream(ExecStreamResponseUpdate {
exited: true,
error: error.to_string(),
exit_code: -1,
stdout: vec![],
stderr: vec![],
})),
})
.await;
}
});
}
Ok(())
}
     async fn child_event(&mut self, event: ChildEvent) -> Result<()> {
         if event.pid == self.child {
             self.idm
-                .sender
-                .send(IdmPacket {
-                    event: Some(IdmEvent {
-                        event: Some(Event::Exit(IdmExitEvent { code: event.status })),
-                    }),
-                })
+                .emit(Event {
+                    event: Some(EventType::Exit(ExitEvent { code: event.status })),
+                })
                 .await?;
             death(event.status).await?;

crates/guest/src/exec.rs (new file, 172 lines)
@@ -0,0 +1,172 @@
use std::{collections::HashMap, process::Stdio};
use anyhow::{anyhow, Result};
use krata::idm::{
client::IdmClientStreamResponseHandle,
internal::{
exec_stream_request_update::Update, request::Request as RequestType,
ExecStreamResponseUpdate,
},
internal::{response::Response as ResponseType, Request, Response},
};
use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
join,
process::Command,
};
pub struct GuestExecTask {
pub handle: IdmClientStreamResponseHandle<Request>,
}
impl GuestExecTask {
pub async fn run(&self) -> Result<()> {
let mut receiver = self.handle.take().await?;
let Some(ref request) = self.handle.initial.request else {
return Err(anyhow!("request was empty"));
};
let RequestType::ExecStream(update) = request else {
return Err(anyhow!("request was not an exec update"));
};
let Some(Update::Start(ref start)) = update.update else {
return Err(anyhow!("first request did not contain a start update"));
};
let mut cmd = start.command.clone();
if cmd.is_empty() {
return Err(anyhow!("command line was empty"));
}
let exe = cmd.remove(0);
let mut env = HashMap::new();
for entry in &start.environment {
env.insert(entry.key.clone(), entry.value.clone());
}
if !env.contains_key("PATH") {
env.insert(
"PATH".to_string(),
"/bin:/usr/bin:/usr/local/bin".to_string(),
);
}
let dir = if start.working_directory.is_empty() {
"/".to_string()
} else {
start.working_directory.clone()
};
let mut child = Command::new(exe)
.args(cmd)
.envs(env)
.current_dir(dir)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.kill_on_drop(true)
.spawn()
.map_err(|error| anyhow!("failed to spawn: {}", error))?;
let mut stdin = child
.stdin
.take()
.ok_or_else(|| anyhow!("stdin was missing"))?;
let mut stdout = child
.stdout
.take()
.ok_or_else(|| anyhow!("stdout was missing"))?;
let mut stderr = child
.stderr
.take()
.ok_or_else(|| anyhow!("stderr was missing"))?;
let stdout_handle = self.handle.clone();
let stdout_task = tokio::task::spawn(async move {
let mut stdout_buffer = vec![0u8; 8 * 1024];
loop {
let Ok(size) = stdout.read(&mut stdout_buffer).await else {
break;
};
if size > 0 {
let response = Response {
response: Some(ResponseType::ExecStream(ExecStreamResponseUpdate {
exited: false,
exit_code: 0,
error: String::new(),
stdout: stdout_buffer[0..size].to_vec(),
stderr: vec![],
})),
};
let _ = stdout_handle.respond(response).await;
} else {
break;
}
}
});
let stderr_handle = self.handle.clone();
let stderr_task = tokio::task::spawn(async move {
let mut stderr_buffer = vec![0u8; 8 * 1024];
loop {
let Ok(size) = stderr.read(&mut stderr_buffer).await else {
break;
};
if size > 0 {
let response = Response {
response: Some(ResponseType::ExecStream(ExecStreamResponseUpdate {
exited: false,
exit_code: 0,
error: String::new(),
stdout: vec![],
stderr: stderr_buffer[0..size].to_vec(),
})),
};
let _ = stderr_handle.respond(response).await;
} else {
break;
}
}
});
let stdin_task = tokio::task::spawn(async move {
loop {
let Some(request) = receiver.recv().await else {
break;
};
let Some(RequestType::ExecStream(update)) = request.request else {
continue;
};
let Some(Update::Stdin(update)) = update.update else {
continue;
};
if stdin.write_all(&update.data).await.is_err() {
break;
}
}
});
let exit = child.wait().await?;
let code = exit.code().unwrap_or(-1);
let _ = join!(stdout_task, stderr_task);
stdin_task.abort();
let response = Response {
response: Some(ResponseType::ExecStream(ExecStreamResponseUpdate {
exited: true,
exit_code: code,
error: String::new(),
stdout: vec![],
stderr: vec![],
})),
};
self.handle.respond(response).await?;
Ok(())
}
}

@@ -3,8 +3,9 @@ use cgroups_rs::{Cgroup, CgroupPid};
 use futures::stream::TryStreamExt;
 use ipnetwork::IpNetwork;
 use krata::ethtool::EthtoolHandle;
-use krata::idm::client::IdmClient;
-use krata::launchcfg::{LaunchInfo, LaunchNetwork};
+use krata::idm::client::IdmInternalClient;
+use krata::idm::internal::INTERNAL_IDM_CHANNEL;
+use krata::launchcfg::{LaunchInfo, LaunchNetwork, LaunchPackedFormat};
 use libc::{sethostname, setsid, TIOCSCTTY};
 use log::{trace, warn};
 use nix::ioctl_write_int_bad;
@@ -17,14 +18,12 @@ use std::fs::{File, OpenOptions, Permissions};
 use std::io;
 use std::net::{Ipv4Addr, Ipv6Addr};
 use std::os::fd::AsRawFd;
-use std::os::linux::fs::MetadataExt;
 use std::os::unix::ffi::OsStrExt;
 use std::os::unix::fs::{chroot, PermissionsExt};
 use std::path::{Path, PathBuf};
 use std::str::FromStr;
 use sys_mount::{FilesystemType, Mount, MountFlags};
 use tokio::fs;
-use walkdir::WalkDir;

 use crate::background::GuestBackground;
@@ -79,16 +78,17 @@ impl GuestInit {
             Err(error) => warn!("failed to open console: {}", error),
         };

-        let idm = IdmClient::open("/dev/hvc1")
+        let idm = IdmInternalClient::open(INTERNAL_IDM_CHANNEL, "/dev/hvc1")
             .await
             .map_err(|x| anyhow!("failed to open idm client: {}", x))?;

-        self.mount_squashfs_images().await?;
+        self.mount_config_image().await?;
         let config = self.parse_image_config().await?;
         let launch = self.parse_launch_config().await?;
+        self.mount_root_image(launch.root.format.clone()).await?;
         self.mount_new_root().await?;
-        self.nuke_initrd().await?;
         self.bind_new_root().await?;

         if let Some(hostname) = launch.hostname.clone() {
@@ -101,11 +101,19 @@ impl GuestInit {
             if result != 0 {
                 warn!("failed to set hostname: {}", result);
             }
+
+            let etc = PathBuf::from_str("/etc")?;
+            if !etc.exists() {
+                fs::create_dir(&etc).await?;
+            }
+            let mut etc_hostname = etc;
+            etc_hostname.push("hostname");
+            fs::write(&etc_hostname, hostname + "\n").await?;
         }

         if let Some(network) = &launch.network {
             trace!("initializing network");
-            if let Err(error) = self.network_setup(network).await {
+            if let Err(error) = self.network_setup(&launch, network).await {
                 warn!("failed to initialize network: {}", error);
             }
         }
@@ -180,24 +188,41 @@ impl GuestInit {
         Ok(())
     }

-    async fn mount_squashfs_images(&mut self) -> Result<()> {
-        trace!("mounting squashfs images");
-        let image_mount_path = Path::new(IMAGE_MOUNT_PATH);
-        let config_mount_path = Path::new(CONFIG_MOUNT_PATH);
-        self.mount_squashfs(Path::new(IMAGE_BLOCK_DEVICE_PATH), image_mount_path)
-            .await?;
-        self.mount_squashfs(Path::new(CONFIG_BLOCK_DEVICE_PATH), config_mount_path)
-            .await?;
+    async fn mount_config_image(&mut self) -> Result<()> {
+        trace!("mounting config image");
+        let config_mount_path = Path::new(CONFIG_MOUNT_PATH);
+        self.mount_image(
+            Path::new(CONFIG_BLOCK_DEVICE_PATH),
+            config_mount_path,
+            LaunchPackedFormat::Squashfs,
+        )
+        .await?;
+        Ok(())
+    }
+
+    async fn mount_root_image(&mut self, format: LaunchPackedFormat) -> Result<()> {
+        trace!("mounting root image");
+        let image_mount_path = Path::new(IMAGE_MOUNT_PATH);
+        self.mount_image(Path::new(IMAGE_BLOCK_DEVICE_PATH), image_mount_path, format)
+            .await?;
         Ok(())
     }

-    async fn mount_squashfs(&mut self, from: &Path, to: &Path) -> Result<()> {
-        trace!("mounting squashfs image {:?} to {:?}", from, to);
+    async fn mount_image(
+        &mut self,
+        from: &Path,
+        to: &Path,
+        format: LaunchPackedFormat,
+    ) -> Result<()> {
+        trace!("mounting {:?} image {:?} to {:?}", format, from, to);
         if !to.is_dir() {
             fs::create_dir(to).await?;
         }
         Mount::builder()
-            .fstype(FilesystemType::Manual("squashfs"))
+            .fstype(FilesystemType::Manual(match format {
+                LaunchPackedFormat::Squashfs => "squashfs",
+                LaunchPackedFormat::Erofs => "erofs",
+            }))
             .flags(MountFlags::RDONLY)
             .mount(from, to)?;
         Ok(())
@@ -271,40 +296,6 @@ impl GuestInit {
         Ok(serde_json::from_str(&content)?)
     }

-    async fn nuke_initrd(&mut self) -> Result<()> {
-        trace!("nuking initrd");
-        let initrd_dev = fs::metadata("/").await?.st_dev();
-        for item in WalkDir::new("/")
-            .same_file_system(true)
-            .follow_links(false)
-            .contents_first(true)
-        {
-            if item.is_err() {
-                continue;
-            }
-            let item = item?;
-            let metadata = match item.metadata() {
-                Ok(value) => value,
-                Err(_) => continue,
-            };
-            if metadata.st_dev() != initrd_dev {
-                continue;
-            }
-            if metadata.is_symlink() || metadata.is_file() {
-                let _ = fs::remove_file(item.path()).await;
-                trace!("deleting file {:?}", item.path());
-            } else if metadata.is_dir() {
-                let _ = fs::remove_dir(item.path()).await;
-                trace!("deleting directory {:?}", item.path());
-            }
-        }
-        trace!("nuked initrd");
-        Ok(())
-    }
-
     async fn bind_new_root(&mut self) -> Result<()> {
         self.mount_move_subtree(Path::new(SYS_PATH), Path::new(NEW_ROOT_SYS_PATH))
             .await?;
@@ -324,7 +315,7 @@ impl GuestInit {
         Ok(())
     }

-    async fn network_setup(&mut self, network: &LaunchNetwork) -> Result<()> {
+    async fn network_setup(&mut self, cfg: &LaunchInfo, network: &LaunchNetwork) -> Result<()> {
         trace!("setting up network for link");

         let etc = PathBuf::from_str("/etc")?;
@ -332,14 +323,33 @@ impl GuestInit {
fs::create_dir(etc).await?; fs::create_dir(etc).await?;
} }
let resolv = PathBuf::from_str("/etc/resolv.conf")?; let resolv = PathBuf::from_str("/etc/resolv.conf")?;
let mut lines = vec!["# krata resolver configuration".to_string()];
for nameserver in &network.resolver.nameservers { {
lines.push(format!("nameserver {}", nameserver)); let mut lines = vec!["# krata resolver configuration".to_string()];
for nameserver in &network.resolver.nameservers {
lines.push(format!("nameserver {}", nameserver));
}
let mut conf = lines.join("\n");
conf.push('\n');
fs::write(resolv, conf).await?;
}
let hosts = PathBuf::from_str("/etc/hosts")?;
if let Some(ref hostname) = cfg.hostname {
let mut lines = if hosts.exists() {
fs::read_to_string(&hosts)
.await?
.lines()
.map(|x| x.to_string())
.collect::<Vec<_>>()
} else {
vec!["127.0.0.1 localhost".to_string()]
};
lines.push(format!("127.0.1.1 {}", hostname));
fs::write(&hosts, lines.join("\n") + "\n").await?;
} }
let mut conf = lines.join("\n");
conf.push('\n');
fs::write(resolv, conf).await?;
self.network_configure_ethtool(network).await?; self.network_configure_ethtool(network).await?;
self.network_configure_link(network).await?; self.network_configure_link(network).await?;
Ok(()) Ok(())
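The /etc/hosts handling added in this hunk keeps any existing entries (or seeds the conventional localhost line when the file is absent), then appends a 127.0.1.1 entry for the configured hostname. That logic can be sketched as a pure function; the `hosts_with_hostname` helper below is hypothetical, for illustration only:

```rust
// Hypothetical mirror of the hosts-file logic in network_setup: keep any
// existing entries (or seed the conventional localhost line), then append
// a 127.0.1.1 entry mapping to the guest hostname.
fn hosts_with_hostname(existing: Option<&str>, hostname: &str) -> String {
    let mut lines: Vec<String> = match existing {
        Some(content) => content.lines().map(|x| x.to_string()).collect(),
        None => vec!["127.0.0.1 localhost".to_string()],
    };
    lines.push(format!("127.0.1.1 {}", hostname));
    lines.join("\n") + "\n"
}

fn main() {
    // A fresh guest with no /etc/hosts gets the seeded localhost entry.
    assert_eq!(
        hosts_with_hostname(None, "guest-1"),
        "127.0.0.1 localhost\n127.0.1.1 guest-1\n"
    );
    println!("ok");
}
```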
@@ -429,7 +439,12 @@ impl GuestInit {
        Ok(())
    }

-    async fn run(&mut self, config: &Config, launch: &LaunchInfo, idm: IdmClient) -> Result<()> {
+    async fn run(
+        &mut self,
+        config: &Config,
+        launch: &LaunchInfo,
+        idm: IdmInternalClient,
+    ) -> Result<()> {
        let mut cmd = match config.cmd() {
            None => vec![],
            Some(value) => value.clone(),
@@ -457,9 +472,14 @@ impl GuestInit {
        }
        env.extend(launch.env.clone());
        env.insert("KRATA_CONTAINER".to_string(), "1".to_string());
-        env.insert("TERM".to_string(), "vt100".to_string());
-        let path = GuestInit::resolve_executable(&env, path.into())?;
+        // If we were not provided a terminal definition in our launch manifest, we
+        // default to xterm as most terminal emulators support the xterm control codes.
+        if !env.contains_key("TERM") {
+            env.insert("TERM".to_string(), "xterm".to_string());
+        }
+        let path = resolve_executable(&env, path.into())?;
        let Some(file_name) = path.file_name() else {
            return Err(anyhow!("cannot get file name of command path"));
        };
@@ -517,27 +537,6 @@ impl GuestInit {
        map
    }

-    fn resolve_executable(env: &HashMap<String, String>, path: PathBuf) -> Result<PathBuf> {
-        if path.is_absolute() {
-            return Ok(path);
-        }
-        if path.is_file() {
-            return Ok(path.absolutize()?.to_path_buf());
-        }
-        if let Some(path_var) = env.get("PATH") {
-            for item in path_var.split(':') {
-                let mut exe_path: PathBuf = item.into();
-                exe_path.push(&path);
-                if exe_path.is_file() {
-                    return Ok(exe_path);
-                }
-            }
-        }
-        Ok(path)
-    }
    fn env_list(env: HashMap<String, String>) -> Vec<String> {
        env.iter()
            .map(|(key, value)| format!("{}={}", key, value))
@@ -546,7 +545,7 @@ impl GuestInit {
    async fn fork_and_exec(
        &mut self,
-        idm: IdmClient,
+        idm: IdmInternalClient,
        cgroup: Cgroup,
        working_dir: String,
        path: CString,
@@ -582,9 +581,35 @@ impl GuestInit {
        Ok(())
    }

-    async fn background(&mut self, idm: IdmClient, cgroup: Cgroup, executed: Pid) -> Result<()> {
+    async fn background(
+        &mut self,
+        idm: IdmInternalClient,
+        cgroup: Cgroup,
+        executed: Pid,
+    ) -> Result<()> {
        let mut background = GuestBackground::new(idm, cgroup, executed).await?;
        background.run().await?;
        Ok(())
    }
}
pub fn resolve_executable(env: &HashMap<String, String>, path: PathBuf) -> Result<PathBuf> {
if path.is_absolute() {
return Ok(path);
}
if path.is_file() {
return Ok(path.absolutize()?.to_path_buf());
}
if let Some(path_var) = env.get("PATH") {
for item in path_var.split(':') {
let mut exe_path: PathBuf = item.into();
exe_path.push(&path);
if exe_path.is_file() {
return Ok(exe_path);
}
}
}
Ok(path)
}
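The lookup order in `resolve_executable` is: absolute paths pass through, an existing relative file is absolutized, and otherwise each PATH entry is probed in order. A simplified standalone sketch; the `resolve` helper below is illustrative and omits the `absolutize` step and error plumbing:

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Illustrative simplification of resolve_executable: absolute paths are
// returned unchanged; otherwise each PATH directory is probed for the file,
// falling back to the original path when nothing matches.
fn resolve(env: &HashMap<String, String>, path: PathBuf) -> PathBuf {
    if path.is_absolute() {
        return path;
    }
    if let Some(path_var) = env.get("PATH") {
        for item in path_var.split(':') {
            let mut exe_path: PathBuf = item.into();
            exe_path.push(&path);
            if exe_path.is_file() {
                return exe_path;
            }
        }
    }
    path
}

fn main() {
    let mut env = HashMap::new();
    env.insert("PATH".to_string(), "/usr/bin:/bin".to_string());
    // Absolute paths bypass the PATH search entirely.
    assert_eq!(resolve(&env, PathBuf::from("/bin/sh")), PathBuf::from("/bin/sh"));
    println!("ok");
}
```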


@@ -6,7 +6,9 @@ use xenstore::{XsdClient, XsdInterface};

pub mod background;
pub mod childwait;
+pub mod exec;
pub mod init;
+pub mod metrics;

pub async fn death(code: c_int) -> Result<()> {
    let store = XsdClient::open().await?;

crates/guest/src/metrics.rs Normal file

@@ -0,0 +1,118 @@
use std::{ops::Add, path::Path};
use anyhow::Result;
use krata::idm::internal::{MetricFormat, MetricNode};
use sysinfo::Process;
pub struct MetricsCollector {}
impl MetricsCollector {
pub fn new() -> Result<Self> {
Ok(MetricsCollector {})
}
pub fn collect(&self) -> Result<MetricNode> {
let mut sysinfo = sysinfo::System::new();
Ok(MetricNode::structural(
"guest",
vec![
self.collect_system(&mut sysinfo)?,
self.collect_processes(&mut sysinfo)?,
],
))
}
fn collect_system(&self, sysinfo: &mut sysinfo::System) -> Result<MetricNode> {
sysinfo.refresh_memory();
Ok(MetricNode::structural(
"system",
vec![MetricNode::structural(
"memory",
vec![
MetricNode::value("total", sysinfo.total_memory(), MetricFormat::Bytes),
MetricNode::value("used", sysinfo.used_memory(), MetricFormat::Bytes),
MetricNode::value("free", sysinfo.free_memory(), MetricFormat::Bytes),
],
)],
))
}
fn collect_processes(&self, sysinfo: &mut sysinfo::System) -> Result<MetricNode> {
sysinfo.refresh_processes();
let mut processes = Vec::new();
let mut sysinfo_processes = sysinfo.processes().values().collect::<Vec<_>>();
sysinfo_processes.sort_by_key(|x| x.pid());
for process in sysinfo_processes {
if process.thread_kind().is_some() {
continue;
}
processes.push(MetricsCollector::process_node(process)?);
}
Ok(MetricNode::structural("process", processes))
}
fn process_node(process: &Process) -> Result<MetricNode> {
let mut metrics = vec![];
if let Some(parent) = process.parent() {
metrics.push(MetricNode::value(
"parent",
parent.as_u32() as u64,
MetricFormat::Integer,
));
}
if let Some(exe) = process.exe().and_then(path_as_str) {
metrics.push(MetricNode::raw_value("executable", exe));
}
if let Some(working_directory) = process.cwd().and_then(path_as_str) {
metrics.push(MetricNode::raw_value("cwd", working_directory));
}
let cmdline = process.cmd().to_vec();
metrics.push(MetricNode::raw_value("cmdline", cmdline));
metrics.push(MetricNode::structural(
"memory",
vec![
MetricNode::value("resident", process.memory(), MetricFormat::Bytes),
MetricNode::value("virtual", process.virtual_memory(), MetricFormat::Bytes),
],
));
metrics.push(MetricNode::value(
"lifetime",
process.run_time(),
MetricFormat::DurationSeconds,
));
metrics.push(MetricNode::value(
"uid",
process.user_id().map(|x| (*x).add(0)).unwrap_or(0) as f64,
MetricFormat::Integer,
));
metrics.push(MetricNode::value(
"gid",
process.group_id().map(|x| (*x).add(0)).unwrap_or(0) as f64,
MetricFormat::Integer,
));
metrics.push(MetricNode::value(
"euid",
process
.effective_user_id()
.map(|x| (*x).add(0))
.unwrap_or(0) as f64,
MetricFormat::Integer,
));
metrics.push(MetricNode::value(
"egid",
process.effective_group_id().map(|x| x.add(0)).unwrap_or(0) as f64,
MetricFormat::Integer,
));
Ok(MetricNode::structural(process.pid().to_string(), metrics))
}
}
fn path_as_str(path: &Path) -> Option<String> {
String::from_utf8(path.as_os_str().as_encoded_bytes().to_vec()).ok()
}


@@ -10,12 +10,15 @@ resolver = "2"

[dependencies]
anyhow = { workspace = true }
+async-trait = { workspace = true }
bytes = { workspace = true }
libc = { workspace = true }
log = { workspace = true }
once_cell = { workspace = true }
prost = { workspace = true }
prost-reflect = { workspace = true }
+prost-types = { workspace = true }
+scopeguard = { workspace = true }
serde = { workspace = true }
tonic = { workspace = true }
tokio = { workspace = true }


@@ -8,7 +8,8 @@ fn main() -> Result<()> {
        &mut config,
        &[
            "proto/krata/v1/control.proto",
-            "proto/krata/internal/idm.proto",
+            "proto/krata/idm/transport.proto",
+            "proto/krata/idm/internal.proto",
        ],
        &["proto/"],
    )?;
@@ -16,7 +17,8 @@ fn main() -> Result<()> {
        config,
        &[
            "proto/krata/v1/control.proto",
-            "proto/krata/internal/idm.proto",
+            "proto/krata/idm/transport.proto",
+            "proto/krata/idm/internal.proto",
        ],
        &["proto/"],
    )?;


@@ -0,0 +1,89 @@
syntax = "proto3";
package krata.idm.internal;
option java_multiple_files = true;
option java_package = "dev.krata.proto.idm.internal";
option java_outer_classname = "IdmInternalProto";
import "google/protobuf/struct.proto";
message ExitEvent {
int32 code = 1;
}
message PingRequest {}
message PingResponse {}
message MetricsRequest {}
message MetricsResponse {
MetricNode root = 1;
}
message MetricNode {
string name = 1;
google.protobuf.Value value = 2;
MetricFormat format = 3;
repeated MetricNode children = 4;
}
enum MetricFormat {
METRIC_FORMAT_UNKNOWN = 0;
METRIC_FORMAT_BYTES = 1;
METRIC_FORMAT_INTEGER = 2;
METRIC_FORMAT_DURATION_SECONDS = 3;
}
message ExecEnvVar {
string key = 1;
string value = 2;
}
message ExecStreamRequestStart {
repeated ExecEnvVar environment = 1;
repeated string command = 2;
string working_directory = 3;
}
message ExecStreamRequestStdin {
bytes data = 1;
}
message ExecStreamRequestUpdate {
oneof update {
ExecStreamRequestStart start = 1;
ExecStreamRequestStdin stdin = 2;
}
}
message ExecStreamResponseUpdate {
bool exited = 1;
string error = 2;
int32 exit_code = 3;
bytes stdout = 4;
bytes stderr = 5;
}
message Event {
oneof event {
ExitEvent exit = 1;
}
}
message Request {
oneof request {
PingRequest ping = 1;
MetricsRequest metrics = 2;
ExecStreamRequestUpdate exec_stream = 3;
}
}
message Response {
oneof response {
PingResponse ping = 1;
MetricsResponse metrics = 2;
ExecStreamResponseUpdate exec_stream = 3;
}
}


@@ -0,0 +1,27 @@
syntax = "proto3";
package krata.idm.transport;
option java_multiple_files = true;
option java_package = "dev.krata.proto.idm.transport";
option java_outer_classname = "IdmTransportProto";
message IdmTransportPacket {
uint64 id = 1;
uint64 channel = 2;
IdmTransportPacketForm form = 3;
bytes data = 4;
}
enum IdmTransportPacketForm {
IDM_TRANSPORT_PACKET_FORM_UNKNOWN = 0;
IDM_TRANSPORT_PACKET_FORM_RAW = 1;
IDM_TRANSPORT_PACKET_FORM_EVENT = 2;
IDM_TRANSPORT_PACKET_FORM_REQUEST = 3;
IDM_TRANSPORT_PACKET_FORM_RESPONSE = 4;
IDM_TRANSPORT_PACKET_FORM_STREAM_REQUEST = 5;
IDM_TRANSPORT_PACKET_FORM_STREAM_REQUEST_UPDATE = 6;
IDM_TRANSPORT_PACKET_FORM_STREAM_RESPONSE_UPDATE = 7;
IDM_TRANSPORT_PACKET_FORM_STREAM_REQUEST_CLOSED = 8;
IDM_TRANSPORT_PACKET_FORM_STREAM_RESPONSE_CLOSED = 9;
}


@@ -1,21 +0,0 @@
syntax = "proto3";
package krata.internal.idm;
option java_multiple_files = true;
option java_package = "dev.krata.proto.internal.idm";
option java_outer_classname = "IdmProto";
message IdmExitEvent {
int32 code = 1;
}
message IdmEvent {
oneof event {
IdmExitEvent exit = 1;
}
}
message IdmPacket {
IdmEvent event = 1;
}


@@ -6,6 +6,8 @@ option java_multiple_files = true;
option java_package = "dev.krata.proto.v1.common";
option java_outer_classname = "CommonProto";

+import "google/protobuf/struct.proto";
+
message Guest {
    string id = 1;
    GuestSpec spec = 2;
@@ -15,10 +17,14 @@ message Guest {
message GuestSpec {
    string name = 1;
    GuestImageSpec image = 2;
-    uint32 vcpus = 3;
-    uint64 mem = 4;
-    GuestTaskSpec task = 5;
-    repeated GuestSpecAnnotation annotations = 6;
+    // If not specified, defaults to the daemon default kernel.
+    GuestImageSpec kernel = 3;
+    // If not specified, defaults to the daemon default initrd.
+    GuestImageSpec initrd = 4;
+    uint32 vcpus = 5;
+    uint64 mem = 6;
+    GuestTaskSpec task = 7;
+    repeated GuestSpecAnnotation annotations = 8;
}

message GuestImageSpec {
@@ -27,13 +33,23 @@ message GuestImageSpec {
    }
}

+enum OciImageFormat {
+    OCI_IMAGE_FORMAT_UNKNOWN = 0;
+    OCI_IMAGE_FORMAT_SQUASHFS = 1;
+    OCI_IMAGE_FORMAT_EROFS = 2;
+    // Tar format is not launchable, and is intended for kernel images.
+    OCI_IMAGE_FORMAT_TAR = 3;
+}
+
message GuestOciImageSpec {
-    string image = 1;
+    string digest = 1;
+    OciImageFormat format = 2;
}

message GuestTaskSpec {
    repeated GuestTaskSpecEnvVar environment = 1;
    repeated string command = 2;
+    string working_directory = 3;
}

message GuestTaskSpecEnvVar {
@@ -51,7 +67,8 @@ message GuestState {
    GuestNetworkState network = 2;
    GuestExitInfo exit_info = 3;
    GuestErrorInfo error_info = 4;
-    uint32 domid = 5;
+    string host = 5;
+    uint32 domid = 6;
}

enum GuestStatus {
@@ -80,3 +97,17 @@ message GuestExitInfo {
message GuestErrorInfo {
    string message = 1;
}
message GuestMetricNode {
string name = 1;
google.protobuf.Value value = 2;
GuestMetricFormat format = 3;
repeated GuestMetricNode children = 4;
}
enum GuestMetricFormat {
GUEST_METRIC_FORMAT_UNKNOWN = 0;
GUEST_METRIC_FORMAT_BYTES = 1;
GUEST_METRIC_FORMAT_INTEGER = 2;
GUEST_METRIC_FORMAT_DURATION_SECONDS = 3;
}


@@ -6,15 +6,34 @@ option java_multiple_files = true;
option java_package = "dev.krata.proto.v1.control";
option java_outer_classname = "ControlProto";

+import "krata/idm/transport.proto";
import "krata/v1/common.proto";

service ControlService {
+    rpc IdentifyHost(IdentifyHostRequest) returns (IdentifyHostReply);
    rpc CreateGuest(CreateGuestRequest) returns (CreateGuestReply);
    rpc DestroyGuest(DestroyGuestRequest) returns (DestroyGuestReply);
    rpc ResolveGuest(ResolveGuestRequest) returns (ResolveGuestReply);
    rpc ListGuests(ListGuestsRequest) returns (ListGuestsReply);
+    rpc ExecGuest(stream ExecGuestRequest) returns (stream ExecGuestReply);
    rpc ConsoleData(stream ConsoleDataRequest) returns (stream ConsoleDataReply);
+    rpc ReadGuestMetrics(ReadGuestMetricsRequest) returns (ReadGuestMetricsReply);
+    rpc SnoopIdm(SnoopIdmRequest) returns (stream SnoopIdmReply);
    rpc WatchEvents(WatchEventsRequest) returns (stream WatchEventsReply);
+    rpc PullImage(PullImageRequest) returns (stream PullImageReply);
+}
+
+message IdentifyHostRequest {}
+
+message IdentifyHostReply {
+    string host_uuid = 1;
+    uint32 host_domid = 2;
+    string krata_version = 3;
}

message CreateGuestRequest {
@@ -45,6 +64,20 @@ message ListGuestsReply {
    repeated krata.v1.common.Guest guests = 1;
}

+message ExecGuestRequest {
+    string guest_id = 1;
+    krata.v1.common.GuestTaskSpec task = 2;
+    bytes data = 3;
+}
+
+message ExecGuestReply {
+    bool exited = 1;
+    string error = 2;
+    int32 exit_code = 3;
+    bytes stdout = 4;
+    bytes stderr = 5;
+}
+
message ConsoleDataRequest {
    string guest_id = 1;
    bytes data = 2;
@@ -65,3 +98,92 @@ message WatchEventsReply {
message GuestChangedEvent {
    krata.v1.common.Guest guest = 1;
}
message ReadGuestMetricsRequest {
string guest_id = 1;
}
message ReadGuestMetricsReply {
krata.v1.common.GuestMetricNode root = 1;
}
message SnoopIdmRequest {}
message SnoopIdmReply {
string from = 1;
string to = 2;
krata.idm.transport.IdmTransportPacket packet = 3;
}
message ImageProgress {
ImageProgressPhase phase = 1;
repeated ImageProgressLayer layers = 2;
ImageProgressIndication indication = 3;
}
enum ImageProgressPhase {
IMAGE_PROGRESS_PHASE_UNKNOWN = 0;
IMAGE_PROGRESS_PHASE_STARTED = 1;
IMAGE_PROGRESS_PHASE_RESOLVING = 2;
IMAGE_PROGRESS_PHASE_RESOLVED = 3;
IMAGE_PROGRESS_PHASE_CONFIG_DOWNLOAD = 4;
IMAGE_PROGRESS_PHASE_LAYER_DOWNLOAD = 5;
IMAGE_PROGRESS_PHASE_ASSEMBLE = 6;
IMAGE_PROGRESS_PHASE_PACK = 7;
IMAGE_PROGRESS_PHASE_COMPLETE = 8;
}
message ImageProgressLayer {
string id = 1;
ImageProgressLayerPhase phase = 2;
ImageProgressIndication indication = 3;
}
enum ImageProgressLayerPhase {
IMAGE_PROGRESS_LAYER_PHASE_UNKNOWN = 0;
IMAGE_PROGRESS_LAYER_PHASE_WAITING = 1;
IMAGE_PROGRESS_LAYER_PHASE_DOWNLOADING = 2;
IMAGE_PROGRESS_LAYER_PHASE_DOWNLOADED = 3;
IMAGE_PROGRESS_LAYER_PHASE_EXTRACTING = 4;
IMAGE_PROGRESS_LAYER_PHASE_EXTRACTED = 5;
}
message ImageProgressIndication {
oneof indication {
ImageProgressIndicationBar bar = 1;
ImageProgressIndicationSpinner spinner = 2;
ImageProgressIndicationHidden hidden = 3;
ImageProgressIndicationCompleted completed = 4;
}
}
message ImageProgressIndicationBar {
string message = 1;
uint64 current = 2;
uint64 total = 3;
bool is_bytes = 4;
}
message ImageProgressIndicationSpinner {
string message = 1;
}
message ImageProgressIndicationHidden {}
message ImageProgressIndicationCompleted {
string message = 1;
uint64 total = 2;
bool is_bytes = 3;
}
message PullImageRequest {
string image = 1;
krata.v1.common.OciImageFormat format = 2;
bool overwrite_cache = 3;
}
message PullImageReply {
ImageProgress progress = 1;
string digest = 2;
krata.v1.common.OciImageFormat format = 3;
}


@@ -1,8 +1,14 @@
-use std::path::Path;
+use std::{
+    collections::HashMap,
+    path::Path,
+    sync::{
+        atomic::{AtomicBool, Ordering},
+        Arc,
+    },
+    time::Duration,
+};

-use super::protocol::IdmPacket;
use anyhow::{anyhow, Result};
-use bytes::BytesMut;
use log::{debug, error};
use nix::sys::termios::{cfmakeraw, tcgetattr, tcsetattr, SetArg};
use prost::Message;
@@ -10,44 +16,48 @@ use tokio::{
    fs::File,
    io::{unix::AsyncFd, AsyncReadExt, AsyncWriteExt},
    select,
-    sync::mpsc::{channel, Receiver, Sender},
+    sync::{
+        broadcast,
+        mpsc::{self, Receiver, Sender},
+        oneshot, Mutex,
+    },
    task::JoinHandle,
+    time::timeout,
};

+use super::{
+    internal,
+    serialize::{IdmRequest, IdmSerializable},
+    transport::{IdmTransportPacket, IdmTransportPacketForm},
+};
+
+type OneshotRequestMap<R> = Arc<Mutex<HashMap<u64, oneshot::Sender<<R as IdmRequest>::Response>>>>;
+type StreamRequestMap<R> = Arc<Mutex<HashMap<u64, Sender<<R as IdmRequest>::Response>>>>;
+type StreamRequestUpdateMap<R> = Arc<Mutex<HashMap<u64, mpsc::Sender<R>>>>;
+
+pub type IdmInternalClient = IdmClient<internal::Request, internal::Event>;
+
const IDM_PACKET_QUEUE_LEN: usize = 100;
+const IDM_REQUEST_TIMEOUT_SECS: u64 = 10;
+const IDM_PACKET_MAX_SIZE: usize = 20 * 1024 * 1024;

-pub struct IdmClient {
-    pub receiver: Receiver<IdmPacket>,
-    pub sender: Sender<IdmPacket>,
-    task: JoinHandle<()>,
-}
+#[async_trait::async_trait]
+pub trait IdmBackend: Send {
+    async fn recv(&mut self) -> Result<IdmTransportPacket>;
+    async fn send(&mut self, packet: IdmTransportPacket) -> Result<()>;
+}

-impl Drop for IdmClient {
-    fn drop(&mut self) {
-        self.task.abort();
-    }
-}
+pub struct IdmFileBackend {
+    read_fd: Arc<Mutex<AsyncFd<File>>>,
+    write: Arc<Mutex<File>>,
+}

-impl IdmClient {
-    pub async fn open<P: AsRef<Path>>(path: P) -> Result<IdmClient> {
-        let file = File::options()
-            .read(true)
-            .write(true)
-            .create(false)
-            .open(path)
-            .await?;
-        IdmClient::set_raw_port(&file)?;
-        let (rx_sender, rx_receiver) = channel(IDM_PACKET_QUEUE_LEN);
-        let (tx_sender, tx_receiver) = channel(IDM_PACKET_QUEUE_LEN);
-        let task = tokio::task::spawn(async move {
-            if let Err(error) = IdmClient::process(file, rx_sender, tx_receiver).await {
-                debug!("failed to handle idm client processing: {}", error);
-            }
-        });
-        Ok(IdmClient {
-            receiver: rx_receiver,
-            sender: tx_sender,
-            task,
-        })
-    }
+impl IdmFileBackend {
+    pub async fn new(read_file: File, write_file: File) -> Result<IdmFileBackend> {
+        IdmFileBackend::set_raw_port(&read_file)?;
+        IdmFileBackend::set_raw_port(&write_file)?;
+        Ok(IdmFileBackend {
+            read_fd: Arc::new(Mutex::new(AsyncFd::new(read_file)?)),
+            write: Arc::new(Mutex::new(write_file)),
+        })
+    }
@@ -57,31 +67,413 @@ impl IdmClient {
        tcsetattr(file, SetArg::TCSANOW, &termios)?;
        Ok(())
    }
+}
#[async_trait::async_trait]
impl IdmBackend for IdmFileBackend {
async fn recv(&mut self) -> Result<IdmTransportPacket> {
let mut fd = self.read_fd.lock().await;
let mut guard = fd.readable_mut().await?;
let b1 = guard.get_inner_mut().read_u8().await?;
if b1 != 0xff {
return Ok(IdmTransportPacket::default());
}
let b2 = guard.get_inner_mut().read_u8().await?;
if b2 != 0xff {
return Ok(IdmTransportPacket::default());
}
let size = guard.get_inner_mut().read_u32_le().await?;
if size == 0 {
return Ok(IdmTransportPacket::default());
}
let mut buffer = vec![0u8; size as usize];
guard.get_inner_mut().read_exact(&mut buffer).await?;
match IdmTransportPacket::decode(buffer.as_slice()) {
Ok(packet) => Ok(packet),
Err(error) => Err(anyhow!("received invalid idm packet: {}", error)),
}
}
async fn send(&mut self, packet: IdmTransportPacket) -> Result<()> {
let mut file = self.write.lock().await;
let data = packet.encode_to_vec();
file.write_all(&[0xff, 0xff]).await?;
file.write_u32_le(data.len() as u32).await?;
file.write_all(&data).await?;
Ok(())
}
}
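The file backend above frames each protobuf-encoded IdmTransportPacket as two 0xff magic bytes, a little-endian u32 length, then the payload. A standalone sketch of that framing, independent of prost (the `frame`/`unframe` helper names are illustrative; the real code reads and writes the stream through tokio):

```rust
// Frame layout used by IdmFileBackend: [0xff, 0xff][u32 LE length][body].
fn frame(body: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(body.len() + 6);
    out.extend_from_slice(&[0xff, 0xff]);
    out.extend_from_slice(&(body.len() as u32).to_le_bytes());
    out.extend_from_slice(body);
    out
}

// Returns the body if the magic bytes and length check out, else None
// (the real backend treats a bad prefix as an empty default packet).
fn unframe(buf: &[u8]) -> Option<&[u8]> {
    if buf.len() < 6 || buf[0] != 0xff || buf[1] != 0xff {
        return None;
    }
    let size = u32::from_le_bytes([buf[2], buf[3], buf[4], buf[5]]) as usize;
    buf.get(6..6 + size)
}

fn main() {
    let framed = frame(b"packet-bytes");
    // Round-tripping through the frame recovers the original body.
    assert_eq!(unframe(&framed), Some(&b"packet-bytes"[..]));
    println!("ok");
}
```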
#[derive(Clone)]
pub struct IdmClient<R: IdmRequest, E: IdmSerializable> {
channel: u64,
request_backend_sender: broadcast::Sender<(u64, R)>,
request_stream_backend_sender: broadcast::Sender<IdmClientStreamResponseHandle<R>>,
next_request_id: Arc<Mutex<u64>>,
event_receiver_sender: broadcast::Sender<E>,
tx_sender: Sender<IdmTransportPacket>,
requests: OneshotRequestMap<R>,
request_streams: StreamRequestMap<R>,
task: Arc<JoinHandle<()>>,
}
impl<R: IdmRequest, E: IdmSerializable> Drop for IdmClient<R, E> {
fn drop(&mut self) {
if Arc::strong_count(&self.task) <= 1 {
self.task.abort();
}
}
}
pub struct IdmClientStreamRequestHandle<R: IdmRequest, E: IdmSerializable> {
pub id: u64,
pub receiver: Receiver<R::Response>,
pub client: IdmClient<R, E>,
}
impl<R: IdmRequest, E: IdmSerializable> IdmClientStreamRequestHandle<R, E> {
pub async fn update(&self, request: R) -> Result<()> {
self.client
.tx_sender
.send(IdmTransportPacket {
id: self.id,
channel: self.client.channel,
form: IdmTransportPacketForm::StreamRequestUpdate.into(),
data: request.encode()?,
})
.await?;
Ok(())
}
}
impl<R: IdmRequest, E: IdmSerializable> Drop for IdmClientStreamRequestHandle<R, E> {
fn drop(&mut self) {
let id = self.id;
let client = self.client.clone();
tokio::task::spawn(async move {
let _ = client
.tx_sender
.send(IdmTransportPacket {
id,
channel: client.channel,
form: IdmTransportPacketForm::StreamRequestClosed.into(),
data: vec![],
})
.await;
});
}
}
#[derive(Clone)]
pub struct IdmClientStreamResponseHandle<R: IdmRequest> {
pub initial: R,
pub id: u64,
channel: u64,
tx_sender: Sender<IdmTransportPacket>,
receiver: Arc<Mutex<Option<Receiver<R>>>>,
}
impl<R: IdmRequest> IdmClientStreamResponseHandle<R> {
pub async fn respond(&self, response: R::Response) -> Result<()> {
self.tx_sender
.send(IdmTransportPacket {
id: self.id,
channel: self.channel,
form: IdmTransportPacketForm::StreamResponseUpdate.into(),
data: response.encode()?,
})
.await?;
Ok(())
}
pub async fn take(&self) -> Result<Receiver<R>> {
let mut guard = self.receiver.lock().await;
let Some(receiver) = (*guard).take() else {
return Err(anyhow!("request has already been claimed!"));
};
Ok(receiver)
}
}
impl<R: IdmRequest> Drop for IdmClientStreamResponseHandle<R> {
fn drop(&mut self) {
if Arc::strong_count(&self.receiver) <= 1 {
let id = self.id;
let channel = self.channel;
let tx_sender = self.tx_sender.clone();
tokio::task::spawn(async move {
let _ = tx_sender
.send(IdmTransportPacket {
id,
channel,
form: IdmTransportPacketForm::StreamResponseClosed.into(),
data: vec![],
})
.await;
});
}
}
}
impl<R: IdmRequest, E: IdmSerializable> IdmClient<R, E> {
pub async fn new(channel: u64, backend: Box<dyn IdmBackend>) -> Result<Self> {
let requests = Arc::new(Mutex::new(HashMap::new()));
let request_streams = Arc::new(Mutex::new(HashMap::new()));
let request_update_streams = Arc::new(Mutex::new(HashMap::new()));
let (event_sender, event_receiver) = broadcast::channel(IDM_PACKET_QUEUE_LEN);
let (internal_request_backend_sender, _) = broadcast::channel(IDM_PACKET_QUEUE_LEN);
let (internal_request_stream_backend_sender, _) = broadcast::channel(IDM_PACKET_QUEUE_LEN);
let (tx_sender, tx_receiver) = mpsc::channel(IDM_PACKET_QUEUE_LEN);
let backend_event_sender = event_sender.clone();
let request_backend_sender = internal_request_backend_sender.clone();
let request_stream_backend_sender = internal_request_stream_backend_sender.clone();
let requests_for_client = requests.clone();
let request_streams_for_client = request_streams.clone();
let tx_sender_for_client = tx_sender.clone();
let task = tokio::task::spawn(async move {
if let Err(error) = IdmClient::process(
backend,
channel,
tx_sender,
backend_event_sender,
requests,
request_streams,
request_update_streams,
internal_request_backend_sender,
internal_request_stream_backend_sender,
event_receiver,
tx_receiver,
)
.await
{
debug!("failed to handle idm client processing: {}", error);
}
});
Ok(IdmClient {
channel,
next_request_id: Arc::new(Mutex::new(0)),
event_receiver_sender: event_sender.clone(),
request_backend_sender,
request_stream_backend_sender,
requests: requests_for_client,
request_streams: request_streams_for_client,
tx_sender: tx_sender_for_client,
task: Arc::new(task),
})
}
pub async fn open<P: AsRef<Path>>(channel: u64, path: P) -> Result<Self> {
let read_file = File::options()
.read(true)
.write(false)
.create(false)
.open(&path)
.await?;
let write_file = File::options()
.read(false)
.write(true)
.create(false)
.open(path)
.await?;
let backend = IdmFileBackend::new(read_file, write_file).await?;
IdmClient::new(channel, Box::new(backend) as Box<dyn IdmBackend>).await
}
pub async fn emit<T: IdmSerializable>(&self, event: T) -> Result<()> {
let id = {
let mut guard = self.next_request_id.lock().await;
let req = *guard;
*guard = req.wrapping_add(1);
req
};
self.tx_sender
.send(IdmTransportPacket {
id,
form: IdmTransportPacketForm::Event.into(),
channel: self.channel,
data: event.encode()?,
})
.await?;
Ok(())
}
pub async fn requests(&self) -> Result<broadcast::Receiver<(u64, R)>> {
Ok(self.request_backend_sender.subscribe())
}
pub async fn request_streams(
&self,
) -> Result<broadcast::Receiver<IdmClientStreamResponseHandle<R>>> {
Ok(self.request_stream_backend_sender.subscribe())
}
pub async fn respond<T: IdmSerializable>(&self, id: u64, response: T) -> Result<()> {
let packet = IdmTransportPacket {
id,
form: IdmTransportPacketForm::Response.into(),
channel: self.channel,
data: response.encode()?,
};
self.tx_sender.send(packet).await?;
Ok(())
}
pub async fn subscribe(&self) -> Result<broadcast::Receiver<E>> {
Ok(self.event_receiver_sender.subscribe())
}
pub async fn send(&self, request: R) -> Result<R::Response> {
let (sender, receiver) = oneshot::channel::<R::Response>();
let req = {
let mut guard = self.next_request_id.lock().await;
let req = *guard;
*guard = req.wrapping_add(1);
req
};
let mut requests = self.requests.lock().await;
requests.insert(req, sender);
drop(requests);
let success = AtomicBool::new(false);
let _guard = scopeguard::guard(self.requests.clone(), |requests| {
if success.load(Ordering::Acquire) {
return;
}
tokio::task::spawn(async move {
let mut requests = requests.lock().await;
requests.remove(&req);
});
});
self.tx_sender
.send(IdmTransportPacket {
id: req,
channel: self.channel,
form: IdmTransportPacketForm::Request.into(),
data: request.encode()?,
})
.await?;
let response = timeout(Duration::from_secs(IDM_REQUEST_TIMEOUT_SECS), receiver).await??;
success.store(true, Ordering::Release);
Ok(response)
}
pub async fn send_stream(&self, request: R) -> Result<IdmClientStreamRequestHandle<R, E>> {
let (sender, receiver) = mpsc::channel::<R::Response>(100);
let req = {
let mut guard = self.next_request_id.lock().await;
let req = *guard;
*guard = req.wrapping_add(1);
req
};
let mut requests = self.request_streams.lock().await;
requests.insert(req, sender);
drop(requests);
self.tx_sender
.send(IdmTransportPacket {
id: req,
channel: self.channel,
form: IdmTransportPacketForm::StreamRequest.into(),
data: request.encode()?,
})
.await?;
Ok(IdmClientStreamRequestHandle {
id: req,
receiver,
client: self.clone(),
})
}
+    #[allow(clippy::too_many_arguments)]
    async fn process(
-        file: File,
-        sender: Sender<IdmPacket>,
-        mut receiver: Receiver<IdmPacket>,
+        mut backend: Box<dyn IdmBackend>,
+        channel: u64,
+        tx_sender: Sender<IdmTransportPacket>,
+        event_sender: broadcast::Sender<E>,
+        requests: OneshotRequestMap<R>,
+        request_streams: StreamRequestMap<R>,
+        request_update_streams: StreamRequestUpdateMap<R>,
+        request_backend_sender: broadcast::Sender<(u64, R)>,
+        request_stream_backend_sender: broadcast::Sender<IdmClientStreamResponseHandle<R>>,
+        _event_receiver: broadcast::Receiver<E>,
+        mut receiver: Receiver<IdmTransportPacket>,
    ) -> Result<()> {
-        let mut file = AsyncFd::new(file)?;
        loop {
            select! {
-                x = file.readable_mut() => match x {
-                    Ok(mut guard) => {
-                        let size = guard.get_inner_mut().read_u16_le().await?;
-                        if size == 0 {
+                x = backend.recv() => match x {
+                    Ok(packet) => {
+                        if packet.channel != channel {
                            continue;
                        }
-                        let mut buffer = BytesMut::with_capacity(size as usize);
-                        guard.get_inner_mut().read_exact(&mut buffer).await?;
-                        match IdmPacket::decode(buffer) {
-                            Ok(packet) => {
-                                sender.send(packet).await?;
-                            },
-                            Err(error) => {
-                                error!("received invalid idm packet: {}", error);
-                            }
-                        }
+                        match packet.form() {
+                            IdmTransportPacketForm::Event => {
+                                if let Ok(event) = E::decode(&packet.data) {
+                                    let _ = event_sender.send(event);
+                                }
+                            },
+                            IdmTransportPacketForm::Request => {
+                                if let Ok(request) = R::decode(&packet.data) {
+                                    let _ = request_backend_sender.send((packet.id, request));
+                                }
+                            },
+                            IdmTransportPacketForm::Response => {
+                                let mut requests = requests.lock().await;
+                                if let Some(sender) = requests.remove(&packet.id) {
+                                    drop(requests);
+                                    if let Ok(response) = R::Response::decode(&packet.data) {
+                                        let _ = sender.send(response);
+                                    }
+                                }
+                            },
+                            IdmTransportPacketForm::StreamRequest => {
+                                if let Ok(request) = R::decode(&packet.data) {
+                                    let mut update_streams = request_update_streams.lock().await;
+                                    let (sender, receiver) = mpsc::channel(100);
+                                    update_streams.insert(packet.id, sender.clone());
+                                    let handle = IdmClientStreamResponseHandle {
+                                        initial: request,
+                                        id: packet.id,
+                                        channel,
+                                        tx_sender: tx_sender.clone(),
+                                        receiver: Arc::new(Mutex::new(Some(receiver))),
+                                    };
+                                    let _ = request_stream_backend_sender.send(handle);
+                                }
+                            }
+                            IdmTransportPacketForm::StreamRequestUpdate => {
+                                if let Ok(request) = R::decode(&packet.data) {
+                                    let mut update_streams = request_update_streams.lock().await;
+                                    if let Some(stream) = update_streams.get_mut(&packet.id) {
+                                        let _ = stream.try_send(request);
+                                    }
+                                }
+                            }
+                            IdmTransportPacketForm::StreamRequestClosed => {
+                                let mut update_streams = request_update_streams.lock().await;
+                                update_streams.remove(&packet.id);
+                            }
+                            IdmTransportPacketForm::StreamResponseUpdate => {
+                                let requests = request_streams.lock().await;
+                                if let Some(sender) = requests.get(&packet.id) {
+                                    if let Ok(response) = R::Response::decode(&packet.data) {
+                                        let _ = sender.try_send(response);
+                                    }
+                                }
+                            }
+                            IdmTransportPacketForm::StreamResponseClosed => {
+                                let mut requests = request_streams.lock().await;
+                                requests.remove(&packet.id);
+                            }
+                            _ => {},
+                        }
                    },
@@ -91,13 +483,12 @@ impl IdmClient {
                },
                x = receiver.recv() => match x {
                    Some(packet) => {
-                        let data = packet.encode_to_vec();
-                        if data.len() > u16::MAX as usize {
-                            error!("unable to send idm packet, packet size exceeded (tried to send {} bytes)", data.len());
+                        let length = packet.encoded_len();
+                        if length > IDM_PACKET_MAX_SIZE {
+                            error!("unable to send idm packet, packet size exceeded (tried to send {} bytes)", length);
                            continue;
                        }
-                        file.get_mut().write_u16_le(data.len() as u16).await?;
-                        file.get_mut().write_all(&data).await?;
+                        backend.send(packet).await?;
                    },
                    None => {

View File

@@ -0,0 +1,129 @@
use anyhow::Result;
use prost::Message;
use prost_types::{ListValue, Value};
use super::serialize::{IdmRequest, IdmSerializable};
include!(concat!(env!("OUT_DIR"), "/krata.idm.internal.rs"));
pub const INTERNAL_IDM_CHANNEL: u64 = 0;
impl IdmSerializable for Event {
fn encode(&self) -> Result<Vec<u8>> {
Ok(self.encode_to_vec())
}
fn decode(bytes: &[u8]) -> Result<Self> {
Ok(<Self as prost::Message>::decode(bytes)?)
}
}
impl IdmSerializable for Request {
fn encode(&self) -> Result<Vec<u8>> {
Ok(self.encode_to_vec())
}
fn decode(bytes: &[u8]) -> Result<Self> {
Ok(<Self as prost::Message>::decode(bytes)?)
}
}
impl IdmRequest for Request {
type Response = Response;
}
impl IdmSerializable for Response {
fn encode(&self) -> Result<Vec<u8>> {
Ok(self.encode_to_vec())
}
fn decode(bytes: &[u8]) -> Result<Self> {
Ok(<Self as prost::Message>::decode(bytes)?)
}
}
pub trait AsIdmMetricValue {
fn as_metric_value(&self) -> Value;
}
impl MetricNode {
pub fn structural<N: AsRef<str>>(name: N, children: Vec<MetricNode>) -> MetricNode {
MetricNode {
name: name.as_ref().to_string(),
value: None,
format: MetricFormat::Unknown.into(),
children,
}
}
pub fn raw_value<N: AsRef<str>, V: AsIdmMetricValue>(name: N, value: V) -> MetricNode {
MetricNode {
name: name.as_ref().to_string(),
value: Some(value.as_metric_value()),
format: MetricFormat::Unknown.into(),
children: vec![],
}
}
pub fn value<N: AsRef<str>, V: AsIdmMetricValue>(
name: N,
value: V,
format: MetricFormat,
) -> MetricNode {
MetricNode {
name: name.as_ref().to_string(),
value: Some(value.as_metric_value()),
format: format.into(),
children: vec![],
}
}
}
impl AsIdmMetricValue for String {
fn as_metric_value(&self) -> Value {
Value {
kind: Some(prost_types::value::Kind::StringValue(self.to_string())),
}
}
}
impl AsIdmMetricValue for &str {
fn as_metric_value(&self) -> Value {
Value {
kind: Some(prost_types::value::Kind::StringValue(self.to_string())),
}
}
}
impl AsIdmMetricValue for u64 {
fn as_metric_value(&self) -> Value {
numeric(*self as f64)
}
}
impl AsIdmMetricValue for i64 {
fn as_metric_value(&self) -> Value {
numeric(*self as f64)
}
}
impl AsIdmMetricValue for f64 {
fn as_metric_value(&self) -> Value {
numeric(*self)
}
}
impl<T: AsIdmMetricValue> AsIdmMetricValue for Vec<T> {
fn as_metric_value(&self) -> Value {
let values = self.iter().map(|x| x.as_metric_value()).collect::<_>();
Value {
kind: Some(prost_types::value::Kind::ListValue(ListValue { values })),
}
}
}
fn numeric(value: f64) -> Value {
Value {
kind: Some(prost_types::value::Kind::NumberValue(value)),
}
}

View File

@@ -1,3 +1,5 @@
 #[cfg(unix)]
 pub mod client;
-pub mod protocol;
+pub mod internal;
+pub mod serialize;
+pub mod transport;

View File

@@ -1 +0,0 @@
include!(concat!(env!("OUT_DIR"), "/krata.internal.idm.rs"));

View File

@@ -0,0 +1,10 @@
use anyhow::Result;
pub trait IdmSerializable: Sized + Clone + Send + Sync + 'static {
fn decode(bytes: &[u8]) -> Result<Self>;
fn encode(&self) -> Result<Vec<u8>>;
}
pub trait IdmRequest: IdmSerializable {
type Response: IdmSerializable;
}

View File

@@ -0,0 +1 @@
include!(concat!(env!("OUT_DIR"), "/krata.idm.transport.rs"));

View File

@@ -2,24 +2,30 @@ use std::collections::HashMap;
 use serde::{Deserialize, Serialize};
-#[derive(Serialize, Deserialize, Debug)]
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub enum LaunchPackedFormat {
+    Squashfs,
+    Erofs,
+}
+#[derive(Serialize, Deserialize, Debug, Clone)]
 pub struct LaunchNetworkIpv4 {
     pub address: String,
     pub gateway: String,
 }
-#[derive(Serialize, Deserialize, Debug)]
+#[derive(Serialize, Deserialize, Debug, Clone)]
 pub struct LaunchNetworkIpv6 {
     pub address: String,
     pub gateway: String,
 }
-#[derive(Serialize, Deserialize, Debug)]
+#[derive(Serialize, Deserialize, Debug, Clone)]
 pub struct LaunchNetworkResolver {
     pub nameservers: Vec<String>,
 }
-#[derive(Serialize, Deserialize, Debug)]
+#[derive(Serialize, Deserialize, Debug, Clone)]
 pub struct LaunchNetwork {
     pub link: String,
     pub ipv4: LaunchNetworkIpv4,
@@ -27,8 +33,14 @@ pub struct LaunchNetwork {
     pub resolver: LaunchNetworkResolver,
 }
-#[derive(Serialize, Deserialize, Debug)]
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub struct LaunchRoot {
+    pub format: LaunchPackedFormat,
+}
+#[derive(Serialize, Deserialize, Debug, Clone)]
 pub struct LaunchInfo {
+    pub root: LaunchRoot,
     pub hostname: Option<String>,
     pub network: Option<LaunchNetwork>,
     pub env: HashMap<String, String>,

View File

@@ -1 +1,2 @@
+#![allow(clippy::all)]
 tonic::include_proto!("krata.v1.common");

View File

@@ -1 +1,2 @@
+#![allow(clippy::all)]
 tonic::include_proto!("krata.v1.control");

View File

@@ -16,7 +16,7 @@ clap = { workspace = true }
 env_logger = { workspace = true }
 etherparse = { workspace = true }
 futures = { workspace = true }
-krata = { path = "../krata", version = "^0.0.7" }
+krata = { path = "../krata", version = "^0.0.10" }
 krata-advmac = { workspace = true }
 libc = { workspace = true }
 log = { workspace = true }

View File

@@ -14,11 +14,13 @@ async-compression = { workspace = true, features = ["tokio", "gzip", "zstd"] }
 async-trait = { workspace = true }
 backhand = { workspace = true }
 bytes = { workspace = true }
+indexmap = { workspace = true }
 krata-tokio-tar = { workspace = true }
 log = { workspace = true }
 oci-spec = { workspace = true }
 path-clean = { workspace = true }
 reqwest = { workspace = true }
+scopeguard = { workspace = true }
 serde = { workspace = true }
 serde_json = { workspace = true }
 sha256 = { workspace = true }

View File

@@ -2,7 +2,12 @@ use std::{env::args, path::PathBuf};
 use anyhow::Result;
 use env_logger::Env;
-use krataoci::{cache::ImageCache, compiler::ImageCompiler, name::ImageName};
+use krataoci::{
+    name::ImageName,
+    packer::{service::OciPackerService, OciPackedFormat},
+    progress::OciProgressContext,
+    registry::OciPlatform,
+};
 use tokio::fs;
 #[tokio::main]
@@ -17,13 +22,27 @@ async fn main() -> Result<()> {
         fs::create_dir(&cache_dir).await?;
     }
-    let cache = ImageCache::new(&cache_dir)?;
-    let compiler = ImageCompiler::new(&cache, seed)?;
-    let info = compiler.compile(&image).await?;
+    let (context, mut receiver) = OciProgressContext::create();
+    tokio::task::spawn(async move {
+        loop {
+            if (receiver.changed().await).is_err() {
+                break;
+            }
+            let progress = receiver.borrow_and_update();
+            println!("phase {:?}", progress.phase);
+            for (id, layer) in &progress.layers {
+                println!("{} {:?} {:?}", id, layer.phase, layer.indication,)
+            }
+        }
+    });
+    let service = OciPackerService::new(seed, &cache_dir, OciPlatform::current()).await?;
+    let packed = service
+        .request(image.clone(), OciPackedFormat::Squashfs, false, context)
+        .await?;
     println!(
         "generated squashfs of {} to {}",
         image,
-        info.image_squashfs.to_string_lossy()
+        packed.path.to_string_lossy()
     );
     Ok(())
 }

crates/oci/src/assemble.rs Normal file
View File

@@ -0,0 +1,273 @@
use crate::fetch::{OciImageFetcher, OciImageLayer, OciImageLayerReader, OciResolvedImage};
use crate::progress::OciBoundProgress;
use crate::schema::OciSchema;
use crate::vfs::{VfsNode, VfsTree};
use anyhow::{anyhow, Result};
use log::{debug, trace, warn};
use oci_spec::image::{Descriptor, ImageConfiguration, ImageManifest};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use tokio::fs;
use tokio_stream::StreamExt;
use tokio_tar::{Archive, Entry};
use uuid::Uuid;
pub struct OciImageAssembled {
pub digest: String,
pub descriptor: Descriptor,
pub manifest: OciSchema<ImageManifest>,
pub config: OciSchema<ImageConfiguration>,
pub vfs: Arc<VfsTree>,
pub tmp_dir: Option<PathBuf>,
}
impl Drop for OciImageAssembled {
fn drop(&mut self) {
if let Some(tmp) = self.tmp_dir.clone() {
tokio::task::spawn(async move {
let _ = fs::remove_dir_all(&tmp).await;
});
}
}
}
pub struct OciImageAssembler {
downloader: OciImageFetcher,
resolved: Option<OciResolvedImage>,
progress: OciBoundProgress,
work_dir: PathBuf,
disk_dir: PathBuf,
tmp_dir: Option<PathBuf>,
success: AtomicBool,
}
impl OciImageAssembler {
pub async fn new(
downloader: OciImageFetcher,
resolved: OciResolvedImage,
progress: OciBoundProgress,
work_dir: Option<PathBuf>,
disk_dir: Option<PathBuf>,
) -> Result<OciImageAssembler> {
let tmp_dir = if work_dir.is_none() || disk_dir.is_none() {
let mut tmp_dir = std::env::temp_dir().clone();
tmp_dir.push(format!("oci-assemble-{}", Uuid::new_v4()));
Some(tmp_dir)
} else {
None
};
let work_dir = if let Some(work_dir) = work_dir {
work_dir
} else {
let mut tmp_dir = tmp_dir
.clone()
.ok_or(anyhow!("tmp_dir was not created when expected"))?;
tmp_dir.push("work");
tmp_dir
};
let target_dir = if let Some(target_dir) = disk_dir {
target_dir
} else {
let mut tmp_dir = tmp_dir
.clone()
.ok_or(anyhow!("tmp_dir was not created when expected"))?;
tmp_dir.push("image");
tmp_dir
};
fs::create_dir_all(&work_dir).await?;
fs::create_dir_all(&target_dir).await?;
Ok(OciImageAssembler {
downloader,
resolved: Some(resolved),
progress,
work_dir,
disk_dir: target_dir,
tmp_dir,
success: AtomicBool::new(false),
})
}
pub async fn assemble(self) -> Result<OciImageAssembled> {
debug!("assemble");
let mut layer_dir = self.work_dir.clone();
layer_dir.push("layer");
fs::create_dir_all(&layer_dir).await?;
self.assemble_with(&layer_dir).await
}
async fn assemble_with(mut self, layer_dir: &Path) -> Result<OciImageAssembled> {
let Some(ref resolved) = self.resolved else {
return Err(anyhow!("resolved image was not available when expected"));
};
let local = self.downloader.download(resolved, layer_dir).await?;
let mut vfs = VfsTree::new();
for layer in &local.layers {
debug!(
"process layer digest={} compression={:?}",
&layer.digest, layer.compression,
);
self.progress
.update(|progress| {
progress.start_extracting_layer(&layer.digest);
})
.await;
debug!("process layer digest={}", &layer.digest,);
let mut archive = layer.archive().await?;
let mut entries = archive.entries()?;
let mut count = 0u64;
let mut size = 0u64;
while let Some(entry) = entries.next().await {
let mut entry = entry?;
let path = entry.path()?;
let Some(name) = path.file_name() else {
continue;
};
let Some(name) = name.to_str() else {
continue;
};
if name.starts_with(".wh.") {
self.process_whiteout_entry(&mut vfs, &entry, name, layer)
.await?;
} else {
let reference = vfs.insert_tar_entry(&entry)?;
self.progress
.update(|progress| {
progress.extracting_layer(&layer.digest, &reference.name);
})
.await;
size += self
.process_write_entry(&mut vfs, &mut entry, layer)
.await?;
count += 1;
}
}
self.progress
.update(|progress| {
progress.extracted_layer(&layer.digest, count, size);
})
.await;
}
for layer in &local.layers {
if layer.path.exists() {
fs::remove_file(&layer.path).await?;
}
}
let Some(resolved) = self.resolved.take() else {
return Err(anyhow!("resolved image was not available when expected"));
};
let assembled = OciImageAssembled {
vfs: Arc::new(vfs),
descriptor: resolved.descriptor,
digest: resolved.digest,
manifest: resolved.manifest,
config: local.config,
tmp_dir: self.tmp_dir.clone(),
};
self.success.store(true, Ordering::Release);
Ok(assembled)
}
async fn process_whiteout_entry(
&self,
vfs: &mut VfsTree,
entry: &Entry<Archive<Pin<Box<dyn OciImageLayerReader + Send>>>>,
name: &str,
layer: &OciImageLayer,
) -> Result<()> {
let path = entry.path()?;
let mut path = path.to_path_buf();
path.pop();
let opaque = name == ".wh..wh..opq";
if !opaque {
let file = &name[4..];
path.push(file);
}
trace!(
"whiteout entry {:?} layer={} path={:?}",
entry.path()?,
&layer.digest,
path
);
let result = vfs.root.remove(&path);
if let Some((parent, mut removed)) = result {
delete_disk_paths(&removed).await?;
if opaque {
removed.children.clear();
parent.children.push(removed);
}
} else {
warn!(
"whiteout entry layer={} path={:?} did not exist",
&layer.digest, path
);
}
Ok(())
}
async fn process_write_entry(
&self,
vfs: &mut VfsTree,
entry: &mut Entry<Archive<Pin<Box<dyn OciImageLayerReader + Send>>>>,
layer: &OciImageLayer,
) -> Result<u64> {
if !entry.header().entry_type().is_file() {
return Ok(0);
}
trace!(
"unpack entry layer={} path={:?} type={:?}",
&layer.digest,
entry.path()?,
entry.header().entry_type(),
);
entry.set_preserve_permissions(false);
entry.set_unpack_xattrs(false);
entry.set_preserve_mtime(false);
let path = entry
.unpack_in(&self.disk_dir)
.await?
.ok_or(anyhow!("unpack did not return a path"))?;
vfs.set_disk_path(&entry.path()?, &path)?;
Ok(entry.header().size()?)
}
}
impl Drop for OciImageAssembler {
fn drop(&mut self) {
if !self.success.load(Ordering::Acquire) {
if let Some(tmp_dir) = self.tmp_dir.clone() {
tokio::task::spawn(async move {
let _ = fs::remove_dir_all(tmp_dir).await;
});
}
}
}
}
async fn delete_disk_paths(node: &VfsNode) -> Result<()> {
let mut queue = vec![node];
while !queue.is_empty() {
let node = queue.remove(0);
if let Some(ref disk_path) = node.disk_path {
if !disk_path.exists() {
warn!("disk path {:?} does not exist", disk_path);
}
fs::remove_file(disk_path).await?;
}
let children = node.children.iter().collect::<Vec<_>>();
queue.extend_from_slice(&children);
}
Ok(())
}

View File

@@ -1,71 +0,0 @@
use super::compiler::ImageInfo;
use anyhow::Result;
use log::debug;
use oci_spec::image::{ImageConfiguration, ImageManifest};
use std::path::{Path, PathBuf};
use tokio::fs;
#[derive(Clone)]
pub struct ImageCache {
cache_dir: PathBuf,
}
impl ImageCache {
pub fn new(cache_dir: &Path) -> Result<ImageCache> {
Ok(ImageCache {
cache_dir: cache_dir.to_path_buf(),
})
}
pub async fn recall(&self, digest: &str) -> Result<Option<ImageInfo>> {
let mut squashfs_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
squashfs_path.push(format!("{}.squashfs", digest));
manifest_path.push(format!("{}.manifest.json", digest));
config_path.push(format!("{}.config.json", digest));
Ok(
if squashfs_path.exists() && manifest_path.exists() && config_path.exists() {
let squashfs_metadata = fs::metadata(&squashfs_path).await?;
let manifest_metadata = fs::metadata(&manifest_path).await?;
let config_metadata = fs::metadata(&config_path).await?;
if squashfs_metadata.is_file()
&& manifest_metadata.is_file()
&& config_metadata.is_file()
{
let manifest_text = fs::read_to_string(&manifest_path).await?;
let manifest: ImageManifest = serde_json::from_str(&manifest_text)?;
let config_text = fs::read_to_string(&config_path).await?;
let config: ImageConfiguration = serde_json::from_str(&config_text)?;
debug!("cache hit digest={}", digest);
Some(ImageInfo::new(squashfs_path.clone(), manifest, config)?)
} else {
None
}
} else {
debug!("cache miss digest={}", digest);
None
},
)
}
pub async fn store(&self, digest: &str, info: &ImageInfo) -> Result<ImageInfo> {
debug!("cache store digest={}", digest);
let mut squashfs_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
squashfs_path.push(format!("{}.squashfs", digest));
manifest_path.push(format!("{}.manifest.json", digest));
config_path.push(format!("{}.config.json", digest));
fs::copy(&info.image_squashfs, &squashfs_path).await?;
let manifest_text = serde_json::to_string_pretty(&info.manifest)?;
fs::write(&manifest_path, manifest_text).await?;
let config_text = serde_json::to_string_pretty(&info.config)?;
fs::write(&config_path, config_text).await?;
ImageInfo::new(
squashfs_path.clone(),
info.manifest.clone(),
info.config.clone(),
)
}
}

View File

@@ -1,411 +0,0 @@
use crate::cache::ImageCache;
use crate::fetch::{OciImageDownloader, OciImageLayer};
use crate::name::ImageName;
use crate::registry::OciRegistryPlatform;
use anyhow::{anyhow, Result};
use backhand::compression::Compressor;
use backhand::{FilesystemCompressor, FilesystemWriter, NodeHeader};
use log::{debug, trace, warn};
use oci_spec::image::{ImageConfiguration, ImageManifest};
use std::borrow::Cow;
use std::fs::File;
use std::io::{BufWriter, ErrorKind, Read};
use std::os::unix::fs::{FileTypeExt, MetadataExt, PermissionsExt};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use tokio::fs;
use tokio::io::AsyncRead;
use tokio_stream::StreamExt;
use tokio_tar::{Archive, Entry};
use uuid::Uuid;
use walkdir::WalkDir;
pub const IMAGE_SQUASHFS_VERSION: u64 = 2;
pub struct ImageInfo {
pub image_squashfs: PathBuf,
pub manifest: ImageManifest,
pub config: ImageConfiguration,
}
impl ImageInfo {
pub fn new(
squashfs: PathBuf,
manifest: ImageManifest,
config: ImageConfiguration,
) -> Result<ImageInfo> {
Ok(ImageInfo {
image_squashfs: squashfs,
manifest,
config,
})
}
}
pub struct ImageCompiler<'a> {
cache: &'a ImageCache,
seed: Option<PathBuf>,
}
impl ImageCompiler<'_> {
pub fn new(cache: &ImageCache, seed: Option<PathBuf>) -> Result<ImageCompiler> {
Ok(ImageCompiler { cache, seed })
}
pub async fn compile(&self, image: &ImageName) -> Result<ImageInfo> {
debug!("compile image={image}");
let mut tmp_dir = std::env::temp_dir().clone();
tmp_dir.push(format!("krata-compile-{}", Uuid::new_v4()));
let mut image_dir = tmp_dir.clone();
image_dir.push("image");
fs::create_dir_all(&image_dir).await?;
let mut layer_dir = tmp_dir.clone();
layer_dir.push("layer");
fs::create_dir_all(&layer_dir).await?;
let mut squash_file = tmp_dir.clone();
squash_file.push("image.squashfs");
let info = self
.download_and_compile(image, &layer_dir, &image_dir, &squash_file)
.await?;
fs::remove_dir_all(&tmp_dir).await?;
Ok(info)
}
async fn download_and_compile(
&self,
image: &ImageName,
layer_dir: &Path,
image_dir: &Path,
squash_file: &Path,
) -> Result<ImageInfo> {
let downloader = OciImageDownloader::new(
self.seed.clone(),
layer_dir.to_path_buf(),
OciRegistryPlatform::current(),
);
let resolved = downloader.resolve(image.clone()).await?;
let cache_key = format!(
"manifest={}:squashfs-version={}\n",
resolved.digest, IMAGE_SQUASHFS_VERSION
);
let cache_digest = sha256::digest(cache_key);
if let Some(cached) = self.cache.recall(&cache_digest).await? {
return Ok(cached);
}
let local = downloader.download(resolved).await?;
for layer in &local.layers {
debug!(
"process layer digest={} compression={:?}",
&layer.digest, layer.compression,
);
let whiteouts = self.process_layer_whiteout(layer, image_dir).await?;
debug!(
"process layer digest={} whiteouts={:?}",
&layer.digest, whiteouts
);
let mut archive = layer.archive().await?;
let mut entries = archive.entries()?;
while let Some(entry) = entries.next().await {
let mut entry = entry?;
let path = entry.path()?;
let mut maybe_whiteout_path_str =
path.to_str().map(|x| x.to_string()).unwrap_or_default();
if whiteouts.contains(&maybe_whiteout_path_str) {
continue;
}
maybe_whiteout_path_str.push('/');
if whiteouts.contains(&maybe_whiteout_path_str) {
continue;
}
let Some(name) = path.file_name() else {
return Err(anyhow!("unable to get file name"));
};
let Some(name) = name.to_str() else {
return Err(anyhow!("unable to get file name as string"));
};
if name.starts_with(".wh.") {
continue;
} else {
self.process_write_entry(&mut entry, layer, image_dir)
.await?;
}
}
}
for layer in &local.layers {
if layer.path.exists() {
fs::remove_file(&layer.path).await?;
}
}
self.squash(image_dir, squash_file)?;
let info = ImageInfo::new(
squash_file.to_path_buf(),
local.image.manifest,
local.config,
)?;
self.cache.store(&cache_digest, &info).await
}
async fn process_layer_whiteout(
&self,
layer: &OciImageLayer,
image_dir: &Path,
) -> Result<Vec<String>> {
let mut whiteouts = Vec::new();
let mut archive = layer.archive().await?;
let mut entries = archive.entries()?;
while let Some(entry) = entries.next().await {
let entry = entry?;
let path = entry.path()?;
let Some(name) = path.file_name() else {
return Err(anyhow!("unable to get file name"));
};
let Some(name) = name.to_str() else {
return Err(anyhow!("unable to get file name as string"));
};
if name.starts_with(".wh.") {
let path = self
.process_whiteout_entry(&entry, name, layer, image_dir)
.await?;
if let Some(path) = path {
whiteouts.push(path);
}
}
}
Ok(whiteouts)
}
async fn process_whiteout_entry(
&self,
entry: &Entry<Archive<Pin<Box<dyn AsyncRead + Send>>>>,
name: &str,
layer: &OciImageLayer,
image_dir: &Path,
) -> Result<Option<String>> {
let path = entry.path()?;
let mut dst = self.check_safe_entry(path.clone(), image_dir)?;
dst.pop();
let mut path = path.to_path_buf();
path.pop();
let opaque = name == ".wh..wh..opq";
if !opaque {
let file = &name[4..];
dst.push(file);
path.push(file);
self.check_safe_path(&dst, image_dir)?;
}
trace!("whiteout entry layer={} path={:?}", &layer.digest, path,);
let whiteout = path
.to_str()
.ok_or(anyhow!("unable to convert path to string"))?
.to_string();
if opaque {
if dst.is_dir() {
let mut reader = fs::read_dir(dst).await?;
while let Some(entry) = reader.next_entry().await? {
let path = entry.path();
if path.is_symlink() || path.is_file() {
fs::remove_file(&path).await?;
} else if path.is_dir() {
fs::remove_dir_all(&path).await?;
} else {
return Err(anyhow!("opaque whiteout entry did not exist"));
}
}
} else {
debug!(
"whiteout opaque entry missing locally layer={} path={:?} local={:?}",
&layer.digest,
entry.path()?,
dst,
);
}
} else if dst.is_file() || dst.is_symlink() {
fs::remove_file(&dst).await?;
} else if dst.is_dir() {
fs::remove_dir_all(&dst).await?;
} else {
debug!(
"whiteout entry missing locally layer={} path={:?} local={:?}",
&layer.digest,
entry.path()?,
dst,
);
}
Ok(if opaque { None } else { Some(whiteout) })
}
async fn process_write_entry(
&self,
entry: &mut Entry<Archive<Pin<Box<dyn AsyncRead + Send>>>>,
layer: &OciImageLayer,
image_dir: &Path,
) -> Result<()> {
let uid = entry.header().uid()?;
let gid = entry.header().gid()?;
trace!(
"unpack entry layer={} path={:?} type={:?} uid={} gid={}",
&layer.digest,
entry.path()?,
entry.header().entry_type(),
uid,
gid,
);
entry.set_preserve_mtime(true);
entry.set_preserve_permissions(true);
entry.set_unpack_xattrs(true);
if let Some(path) = entry.unpack_in(image_dir).await? {
if !path.is_symlink() {
std::os::unix::fs::chown(path, Some(uid as u32), Some(gid as u32))?;
}
}
Ok(())
}
fn check_safe_entry(&self, path: Cow<Path>, image_dir: &Path) -> Result<PathBuf> {
let mut dst = image_dir.to_path_buf();
dst.push(path);
if let Some(name) = dst.file_name() {
if let Some(name) = name.to_str() {
if name.starts_with(".wh.") {
let copy = dst.clone();
dst.pop();
self.check_safe_path(&dst, image_dir)?;
return Ok(copy);
}
}
}
self.check_safe_path(&dst, image_dir)?;
Ok(dst)
}
fn check_safe_path(&self, dst: &Path, image_dir: &Path) -> Result<()> {
let resolved = path_clean::clean(dst);
if !resolved.starts_with(image_dir) {
return Err(anyhow!("layer attempts to work outside image dir"));
}
Ok(())
}
fn squash(&self, image_dir: &Path, squash_file: &Path) -> Result<()> {
let mut writer = FilesystemWriter::default();
writer.set_compressor(FilesystemCompressor::new(Compressor::Gzip, None)?);
let walk = WalkDir::new(image_dir).follow_links(false);
for entry in walk {
let entry = entry?;
let rel = entry
.path()
.strip_prefix(image_dir)?
.to_str()
.ok_or_else(|| anyhow!("failed to strip prefix of tmpdir"))?;
let rel = format!("/{}", rel);
trace!("squash write {}", rel);
let typ = entry.file_type();
let metadata = std::fs::symlink_metadata(entry.path())?;
let uid = metadata.uid();
let gid = metadata.gid();
let mode = metadata.permissions().mode();
let mtime = metadata.mtime();
if rel == "/" {
writer.set_root_uid(uid);
writer.set_root_gid(gid);
writer.set_root_mode(mode as u16);
continue;
}
let header = NodeHeader {
permissions: mode as u16,
uid,
gid,
mtime: mtime as u32,
};
if typ.is_symlink() {
let symlink = std::fs::read_link(entry.path())?;
let symlink = symlink
.to_str()
.ok_or_else(|| anyhow!("failed to read symlink"))?;
writer.push_symlink(symlink, rel, header)?;
} else if typ.is_dir() {
writer.push_dir(rel, header)?;
} else if typ.is_file() {
writer.push_file(ConsumingFileReader::new(entry.path()), rel, header)?;
} else if typ.is_block_device() {
let device = metadata.dev();
writer.push_block_device(device as u32, rel, header)?;
} else if typ.is_char_device() {
let device = metadata.dev();
writer.push_char_device(device as u32, rel, header)?;
} else if typ.is_fifo() {
writer.push_fifo(rel, header)?;
} else if typ.is_socket() {
writer.push_socket(rel, header)?;
} else {
return Err(anyhow!("invalid file type"));
}
}
let squash_file_path = squash_file
.to_str()
.ok_or_else(|| anyhow!("failed to convert squashfs string"))?;
let file = File::create(squash_file)?;
let mut bufwrite = BufWriter::new(file);
trace!("squash generate: {}", squash_file_path);
writer.write(&mut bufwrite)?;
std::fs::remove_dir_all(image_dir)?;
Ok(())
}
}
struct ConsumingFileReader {
path: PathBuf,
file: Option<File>,
}
impl ConsumingFileReader {
fn new(path: &Path) -> ConsumingFileReader {
ConsumingFileReader {
path: path.to_path_buf(),
file: None,
}
}
}
impl Read for ConsumingFileReader {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
if self.file.is_none() {
self.file = Some(File::open(&self.path)?);
}
let Some(ref mut file) = self.file else {
return Err(std::io::Error::new(
ErrorKind::NotFound,
"file was not opened",
));
};
file.read(buf)
}
}
impl Drop for ConsumingFileReader {
fn drop(&mut self) {
let file = self.file.take();
drop(file);
if let Err(error) = std::fs::remove_file(&self.path) {
warn!("failed to delete consuming file {:?}: {}", self.path, error);
}
}
}

View File

@ -1,9 +1,17 @@
use crate::{
progress::{OciBoundProgress, OciProgressPhase},
schema::OciSchema,
};
use super::{ use super::{
name::ImageName, name::ImageName,
registry::{OciRegistryClient, OciRegistryPlatform}, registry::{OciPlatform, OciRegistryClient},
}; };
use std::{ use std::{
fmt::Debug,
io::SeekFrom,
os::unix::fs::MetadataExt,
path::{Path, PathBuf}, path::{Path, PathBuf},
pin::Pin, pin::Pin,
}; };
@ -12,20 +20,21 @@ use anyhow::{anyhow, Result};
use async_compression::tokio::bufread::{GzipDecoder, ZstdDecoder}; use async_compression::tokio::bufread::{GzipDecoder, ZstdDecoder};
use log::debug; use log::debug;
use oci_spec::image::{ use oci_spec::image::{
Descriptor, ImageConfiguration, ImageIndex, ImageManifest, MediaType, ToDockerV2S2, Descriptor, DescriptorBuilder, ImageConfiguration, ImageIndex, ImageManifest, MediaType,
ToDockerV2S2,
}; };
use serde::de::DeserializeOwned; use serde::de::DeserializeOwned;
use tokio::{ use tokio::{
fs::File, fs::{self, File},
io::{AsyncRead, AsyncReadExt, BufReader, BufWriter}, io::{AsyncRead, AsyncReadExt, AsyncSeekExt, BufReader, BufWriter},
}; };
use tokio_stream::StreamExt; use tokio_stream::StreamExt;
use tokio_tar::Archive; use tokio_tar::Archive;
pub struct OciImageDownloader { pub struct OciImageFetcher {
seed: Option<PathBuf>, seed: Option<PathBuf>,
storage: PathBuf, platform: OciPlatform,
platform: OciRegistryPlatform, progress: OciBoundProgress,
} }
#[derive(Clone, Debug, PartialEq, Eq)] #[derive(Clone, Debug, PartialEq, Eq)]
@ -37,16 +46,43 @@ pub enum OciImageLayerCompression {
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub struct OciImageLayer { pub struct OciImageLayer {
pub metadata: Descriptor,
pub path: PathBuf, pub path: PathBuf,
pub digest: String, pub digest: String,
pub compression: OciImageLayerCompression, pub compression: OciImageLayerCompression,
} }
#[async_trait::async_trait]
pub trait OciImageLayerReader: AsyncRead + Sync {
async fn position(&mut self) -> Result<u64>;
}
#[async_trait::async_trait]
impl OciImageLayerReader for BufReader<File> {
async fn position(&mut self) -> Result<u64> {
Ok(self.seek(SeekFrom::Current(0)).await?)
}
}
#[async_trait::async_trait]
impl OciImageLayerReader for GzipDecoder<BufReader<File>> {
async fn position(&mut self) -> Result<u64> {
self.get_mut().position().await
}
}
#[async_trait::async_trait]
impl OciImageLayerReader for ZstdDecoder<BufReader<File>> {
async fn position(&mut self) -> Result<u64> {
self.get_mut().position().await
}
}
impl OciImageLayer { impl OciImageLayer {
pub async fn decompress(&self) -> Result<Pin<Box<dyn AsyncRead + Send>>> { pub async fn decompress(&self) -> Result<Pin<Box<dyn OciImageLayerReader + Send>>> {
let file = File::open(&self.path).await?; let file = File::open(&self.path).await?;
let reader = BufReader::new(file); let reader = BufReader::new(file);
let reader: Pin<Box<dyn AsyncRead + Send>> = match self.compression { let reader: Pin<Box<dyn OciImageLayerReader + Send>> = match self.compression {
OciImageLayerCompression::None => Box::pin(reader), OciImageLayerCompression::None => Box::pin(reader),
OciImageLayerCompression::Gzip => Box::pin(GzipDecoder::new(reader)), OciImageLayerCompression::Gzip => Box::pin(GzipDecoder::new(reader)),
OciImageLayerCompression::Zstd => Box::pin(ZstdDecoder::new(reader)), OciImageLayerCompression::Zstd => Box::pin(ZstdDecoder::new(reader)),
@ -54,7 +90,7 @@ impl OciImageLayer {
Ok(reader) Ok(reader)
} }
pub async fn archive(&self) -> Result<Archive<Pin<Box<dyn AsyncRead + Send>>>> { pub async fn archive(&self) -> Result<Archive<Pin<Box<dyn OciImageLayerReader + Send>>>> {
let decompress = self.decompress().await?; let decompress = self.decompress().await?;
Ok(Archive::new(decompress)) Ok(Archive::new(decompress))
} }
@ -64,33 +100,34 @@ impl OciImageLayer {
pub struct OciResolvedImage { pub struct OciResolvedImage {
pub name: ImageName, pub name: ImageName,
pub digest: String, pub digest: String,
pub manifest: ImageManifest, pub descriptor: Descriptor,
pub manifest: OciSchema<ImageManifest>,
} }
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub struct OciLocalImage { pub struct OciLocalImage {
pub image: OciResolvedImage, pub image: OciResolvedImage,
pub config: ImageConfiguration, pub config: OciSchema<ImageConfiguration>,
pub layers: Vec<OciImageLayer>, pub layers: Vec<OciImageLayer>,
} }
impl OciImageDownloader { impl OciImageFetcher {
pub fn new( pub fn new(
seed: Option<PathBuf>, seed: Option<PathBuf>,
storage: PathBuf, platform: OciPlatform,
platform: OciRegistryPlatform, progress: OciBoundProgress,
) -> OciImageDownloader { ) -> OciImageFetcher {
OciImageDownloader { OciImageFetcher {
seed, seed,
storage,
platform, platform,
progress,
} }
} }
async fn load_seed_json_blob<T: DeserializeOwned>( async fn load_seed_json_blob<T: Clone + Debug + DeserializeOwned>(
&self, &self,
descriptor: &Descriptor, descriptor: &Descriptor,
) -> Result<Option<T>> { ) -> Result<Option<OciSchema<T>>> {
let digest = descriptor.digest(); let digest = descriptor.digest();
let Some((digest_type, digest_content)) = digest.split_once(':') else { let Some((digest_type, digest_content)) = digest.split_once(':') else {
return Err(anyhow!("digest content was not properly formatted")); return Err(anyhow!("digest content was not properly formatted"));
@@ -99,7 +136,10 @@ impl OciImageDownloader {
         self.load_seed_json(&want).await
     }

-    async fn load_seed_json<T: DeserializeOwned>(&self, want: &str) -> Result<Option<T>> {
+    async fn load_seed_json<T: Clone + Debug + DeserializeOwned>(
+        &self,
+        want: &str,
+    ) -> Result<Option<OciSchema<T>>> {
         let Some(ref seed) = self.seed else {
             return Ok(None);
         };
@@ -111,10 +151,10 @@ impl OciImageDownloader {
             let mut entry = entry?;
             let path = String::from_utf8(entry.path_bytes().to_vec())?;
             if path == want {
-                let mut content = String::new();
-                entry.read_to_string(&mut content).await?;
-                let data = serde_json::from_str::<T>(&content)?;
-                return Ok(Some(data));
+                let mut content = Vec::new();
+                entry.read_to_end(&mut content).await?;
+                let item = serde_json::from_slice::<T>(&content)?;
+                return Ok(Some(OciSchema::new(content, item)));
             }
         }
         Ok(None)
@@ -152,7 +192,7 @@ impl OciImageDownloader {
         if let Some(index) = self.load_seed_json::<ImageIndex>("index.json").await? {
             let mut found: Option<&Descriptor> = None;
-            for manifest in index.manifests() {
+            for manifest in index.item().manifests() {
                 let Some(annotations) = manifest.annotations() else {
                     continue;
                 };
@@ -177,6 +217,13 @@ impl OciImageDownloader {
                         continue;
                     }
                 }
+
+                if let Some(ref digest) = image.digest {
+                    if digest != manifest.digest() {
+                        continue;
+                    }
+                }
+
                 found = Some(manifest);
                 break;
             }
@@ -190,6 +237,7 @@ impl OciImageDownloader {
             );
             return Ok(OciResolvedImage {
                 name: image,
+                descriptor: found.clone(),
                 digest: found.digest().clone(),
                 manifest,
             });
@@ -198,37 +246,79 @@ impl OciImageDownloader {
             }
         }
         let mut client = OciRegistryClient::new(image.registry_url()?, self.platform.clone())?;
-        let (manifest, digest) = client
-            .get_manifest_with_digest(&image.name, &image.reference)
+        let (manifest, descriptor, digest) = client
+            .get_manifest_with_digest(&image.name, image.reference.as_ref(), image.digest.as_ref())
             .await?;
+        let descriptor = descriptor.unwrap_or_else(|| {
+            DescriptorBuilder::default()
+                .media_type(MediaType::ImageManifest)
+                .size(manifest.raw().len() as i64)
+                .digest(digest.clone())
+                .build()
+                .unwrap()
+        });
         Ok(OciResolvedImage {
             name: image,
+            descriptor,
             digest,
             manifest,
         })
     }

-    pub async fn download(&self, image: OciResolvedImage) -> Result<OciLocalImage> {
-        let config: ImageConfiguration;
+    pub async fn download(
+        &self,
+        image: &OciResolvedImage,
+        layer_dir: &Path,
+    ) -> Result<OciLocalImage> {
+        let config: OciSchema<ImageConfiguration>;
+        self.progress
+            .update(|progress| {
+                progress.phase = OciProgressPhase::ConfigDownload;
+            })
+            .await;
         let mut client = OciRegistryClient::new(image.name.registry_url()?, self.platform.clone())?;
         if let Some(seeded) = self
-            .load_seed_json_blob::<ImageConfiguration>(image.manifest.config())
+            .load_seed_json_blob::<ImageConfiguration>(image.manifest.item().config())
             .await?
         {
             config = seeded;
         } else {
             let config_bytes = client
-                .get_blob(&image.name.name, image.manifest.config())
+                .get_blob(&image.name.name, image.manifest.item().config())
                 .await?;
-            config = serde_json::from_slice(&config_bytes)?;
+            config = OciSchema::new(
+                config_bytes.to_vec(),
+                serde_json::from_slice(&config_bytes)?,
+            );
         }
+        self.progress
+            .update(|progress| {
+                progress.phase = OciProgressPhase::LayerDownload;
+                for layer in image.manifest.item().layers() {
+                    progress.add_layer(layer.digest());
+                }
+            })
+            .await;
         let mut layers = Vec::new();
-        for layer in image.manifest.layers() {
-            layers.push(self.acquire_layer(&image.name, layer, &mut client).await?);
+        for layer in image.manifest.item().layers() {
+            self.progress
+                .update(|progress| {
+                    progress.downloading_layer(layer.digest(), 0, layer.size() as u64);
+                })
+                .await;
+            layers.push(
+                self.acquire_layer(&image.name, layer, layer_dir, &mut client)
+                    .await?,
+            );
+            self.progress
+                .update(|progress| {
+                    progress.downloaded_layer(layer.digest(), layer.size() as u64);
+                })
+                .await;
         }
         Ok(OciLocalImage {
-            image,
+            image: image.clone(),
             config,
             layers,
         })
@@ -238,6 +328,7 @@ impl OciImageDownloader {
         &self,
         image: &ImageName,
         layer: &Descriptor,
+        layer_dir: &Path,
         client: &mut OciRegistryClient,
     ) -> Result<OciImageLayer> {
         debug!(
@@ -245,13 +336,15 @@ impl OciImageDownloader {
             layer.digest(),
             layer.size()
         );
-        let mut layer_path = self.storage.clone();
+        let mut layer_path = layer_dir.to_path_buf();
         layer_path.push(format!("{}.layer", layer.digest()));
         let seeded = self.extract_seed_blob(layer, &layer_path).await?;
         if !seeded {
             let file = File::create(&layer_path).await?;
-            let size = client.write_blob_to_file(&image.name, layer, file).await?;
+            let size = client
+                .write_blob_to_file(&image.name, layer, file, Some(self.progress.clone()))
+                .await?;
             if layer.size() as u64 != size {
                 return Err(anyhow!(
                     "downloaded layer size differs from size in manifest",
@@ -259,6 +352,12 @@ impl OciImageDownloader {
             }
         }

+        let metadata = fs::metadata(&layer_path).await?;
+        if layer.size() as u64 != metadata.size() {
+            return Err(anyhow!("layer size differs from size in manifest",));
+        }
+
         let mut media_type = layer.media_type().clone();
         // docker layer compatibility
@@ -273,6 +372,7 @@ impl OciImageDownloader {
             other => return Err(anyhow!("found layer with unknown media type: {}", other)),
         };
         Ok(OciImageLayer {
+            metadata: layer.clone(),
             path: layer_path,
             digest: layer.digest().clone(),
             compression,
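The size check added in this hunk can be sketched in isolation. This is a minimal synchronous sketch, not the crate's API: after a layer lands on disk, its byte length must match the size declared by the manifest descriptor. The function name `verify_layer_size` is illustrative only.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Compare the on-disk layer size against the size the manifest declared.
fn verify_layer_size(path: &Path, manifest_size: u64) -> io::Result<bool> {
    let metadata = fs::metadata(path)?;
    Ok(metadata.len() == manifest_size)
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("layer-check.bin");
    fs::write(&path, b"12345")?;
    // Matching size passes; any mismatch is treated as a corrupt download.
    assert!(verify_layer_size(&path, 5)?);
    assert!(!verify_layer_size(&path, 6)?);
    fs::remove_file(&path)?;
    Ok(())
}
```

Note the patch checks the size even when the layer came from the seed archive, which the pre-download `write_blob_to_file` check alone would miss.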


@@ -1,5 +1,8 @@
-pub mod cache;
-pub mod compiler;
+pub mod assemble;
 pub mod fetch;
 pub mod name;
+pub mod packer;
+pub mod progress;
 pub mod registry;
+pub mod schema;
+pub mod vfs;


@@ -2,27 +2,39 @@ use anyhow::Result;
 use std::fmt;
 use url::Url;

-const DOCKER_HUB_MIRROR: &str = "mirror.gcr.io";
-const DEFAULT_IMAGE_TAG: &str = "latest";
-
 #[derive(Debug, Clone, PartialEq, Eq, Hash)]
 pub struct ImageName {
     pub hostname: String,
     pub port: Option<u16>,
     pub name: String,
-    pub reference: String,
+    pub reference: Option<String>,
+    pub digest: Option<String>,
 }

 impl fmt::Display for ImageName {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        if let Some(port) = self.port {
-            write!(
-                f,
-                "{}:{}/{}:{}",
-                self.hostname, port, self.name, self.reference
-            )
+        let mut suffix = String::new();
+
+        if let Some(ref reference) = self.reference {
+            suffix.push(':');
+            suffix.push_str(reference);
+        }
+
+        if let Some(ref digest) = self.digest {
+            suffix.push('@');
+            suffix.push_str(digest);
+        }
+
+        if ImageName::DOCKER_HUB_MIRROR == self.hostname && self.port.is_none() {
+            if self.name.starts_with("library/") {
+                write!(f, "{}{}", &self.name[8..], suffix)
+            } else {
+                write!(f, "{}{}", self.name, suffix)
+            }
+        } else if let Some(port) = self.port {
+            write!(f, "{}:{}/{}{}", self.hostname, port, self.name, suffix)
         } else {
-            write!(f, "{}/{}:{}", self.hostname, self.name, self.reference)
+            write!(f, "{}/{}{}", self.hostname, self.name, suffix)
         }
     }
 }
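The display rules introduced by this hunk can be restated as a standalone sketch (this is illustrative code, not the crate's implementation): tag and digest become a suffix, and Docker Hub images drop both the mirror hostname and the implicit `library/` prefix.

```rust
// Mirror of the constant defined on ImageName in this patch.
const DOCKER_HUB_MIRROR: &str = "registry.docker.io";

// Render an image name the way the new Display impl does.
fn display_name(
    hostname: &str,
    port: Option<u16>,
    name: &str,
    reference: Option<&str>,
    digest: Option<&str>,
) -> String {
    let mut suffix = String::new();
    if let Some(reference) = reference {
        suffix.push(':');
        suffix.push_str(reference);
    }
    if let Some(digest) = digest {
        suffix.push('@');
        suffix.push_str(digest);
    }
    if hostname == DOCKER_HUB_MIRROR && port.is_none() {
        // Hub images render in their short form.
        let short = name.strip_prefix("library/").unwrap_or(name);
        format!("{}{}", short, suffix)
    } else if let Some(port) = port {
        format!("{}:{}/{}{}", hostname, port, name, suffix)
    } else {
        format!("{}/{}{}", hostname, name, suffix)
    }
}

fn main() {
    assert_eq!(
        display_name(DOCKER_HUB_MIRROR, None, "library/alpine", Some("3.19"), None),
        "alpine:3.19"
    );
    assert_eq!(
        display_name("registry.example.com", Some(5000), "team/app", None, Some("sha256:abcd")),
        "registry.example.com:5000/team/app@sha256:abcd"
    );
}
```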
@@ -35,13 +47,21 @@ impl Default for ImageName {
 }

 impl ImageName {
+    pub const DOCKER_HUB_MIRROR: &'static str = "registry.docker.io";
+    pub const DEFAULT_IMAGE_TAG: &'static str = "latest";
+
     pub fn parse(name: &str) -> Result<Self> {
         let full_name = name.to_string();
         let name = full_name.clone();
         let (mut hostname, mut name) = name
             .split_once('/')
             .map(|x| (x.0.to_string(), x.1.to_string()))
-            .unwrap_or_else(|| (DOCKER_HUB_MIRROR.to_string(), format!("library/{}", name)));
+            .unwrap_or_else(|| {
+                (
+                    ImageName::DOCKER_HUB_MIRROR.to_string(),
+                    format!("library/{}", name),
+                )
+            });

         // heuristic to find any docker hub image formats
         // that may be in the hostname format. for example:
@@ -49,7 +69,7 @@ impl ImageName {
         // and neither will abc/hello/xyz:latest
         if !hostname.contains('.') && full_name.chars().filter(|x| *x == '/').count() == 1 {
             name = format!("{}/{}", hostname, name);
-            hostname = DOCKER_HUB_MIRROR.to_string();
+            hostname = ImageName::DOCKER_HUB_MIRROR.to_string();
         }

         let (hostname, port) = if let Some((hostname, port)) = hostname
@@ -60,15 +80,54 @@ impl ImageName {
         } else {
             (hostname, None)
         };
-        let (name, reference) = name
-            .split_once(':')
-            .map(|x| (x.0.to_string(), x.1.to_string()))
-            .unwrap_or((name.to_string(), DEFAULT_IMAGE_TAG.to_string()));
+
+        let name_has_digest = if name.contains('@') {
+            let digest_start = name.chars().position(|c| c == '@');
+            let ref_start = name.chars().position(|c| c == ':');
+            if let (Some(digest_start), Some(ref_start)) = (digest_start, ref_start) {
+                digest_start < ref_start
+            } else {
+                true
+            }
+        } else {
+            false
+        };
+
+        let (name, digest) = if name_has_digest {
+            name.split_once('@')
+                .map(|(name, digest)| (name.to_string(), Some(digest.to_string())))
+                .unwrap_or_else(|| (name, None))
+        } else {
+            (name, None)
+        };
+
+        let (name, reference) = if name.contains(':') {
+            name.split_once(':')
+                .map(|(name, reference)| (name.to_string(), Some(reference.to_string())))
+                .unwrap_or((name, None))
+        } else {
+            (name, None)
+        };
+
+        let (reference, digest) = if let Some(reference) = reference {
+            if let Some(digest) = digest {
+                (Some(reference), Some(digest))
+            } else {
+                reference
+                    .split_once('@')
+                    .map(|(reff, digest)| (Some(reff.to_string()), Some(digest.to_string())))
+                    .unwrap_or_else(|| (Some(reference), None))
+            }
+        } else {
+            (None, digest)
+        };
+
         Ok(ImageName {
             hostname,
             port,
             name,
             reference,
+            digest,
         })
     }


@@ -0,0 +1,236 @@
use std::{os::unix::fs::MetadataExt, path::Path, process::Stdio, sync::Arc};
use super::OciPackedFormat;
use crate::{progress::OciBoundProgress, vfs::VfsTree};
use anyhow::{anyhow, Result};
use log::warn;
use tokio::{
fs::{self, File},
io::BufWriter,
pin,
process::{Child, Command},
select,
};
#[derive(Debug, Clone, Copy)]
pub enum OciPackerBackendType {
MkSquashfs,
MkfsErofs,
Tar,
}
impl OciPackerBackendType {
pub fn format(&self) -> OciPackedFormat {
match self {
OciPackerBackendType::MkSquashfs => OciPackedFormat::Squashfs,
OciPackerBackendType::MkfsErofs => OciPackedFormat::Erofs,
OciPackerBackendType::Tar => OciPackedFormat::Tar,
}
}
pub fn create(&self) -> Box<dyn OciPackerBackend> {
match self {
OciPackerBackendType::MkSquashfs => {
Box::new(OciPackerMkSquashfs {}) as Box<dyn OciPackerBackend>
}
OciPackerBackendType::MkfsErofs => {
Box::new(OciPackerMkfsErofs {}) as Box<dyn OciPackerBackend>
}
OciPackerBackendType::Tar => Box::new(OciPackerTar {}) as Box<dyn OciPackerBackend>,
}
}
}
#[async_trait::async_trait]
pub trait OciPackerBackend: Send + Sync {
async fn pack(&self, progress: OciBoundProgress, vfs: Arc<VfsTree>, file: &Path) -> Result<()>;
}
pub struct OciPackerMkSquashfs {}
#[async_trait::async_trait]
impl OciPackerBackend for OciPackerMkSquashfs {
async fn pack(&self, progress: OciBoundProgress, vfs: Arc<VfsTree>, file: &Path) -> Result<()> {
progress
.update(|progress| {
progress.start_packing();
})
.await;
let child = Command::new("mksquashfs")
.arg("-")
.arg(file)
.arg("-comp")
.arg("gzip")
.arg("-tar")
.stdin(Stdio::piped())
.stderr(Stdio::null())
.stdout(Stdio::null())
.spawn()?;
let mut child = ChildProcessKillGuard(child);
let stdin = child
.0
.stdin
.take()
.ok_or(anyhow!("unable to acquire stdin stream"))?;
let mut writer = Some(tokio::task::spawn(async move {
if let Err(error) = vfs.write_to_tar(stdin).await {
warn!("failed to write tar: {}", error);
return Err(error);
}
Ok(())
}));
let wait = child.0.wait();
pin!(wait);
let status_result = loop {
if let Some(inner) = writer.as_mut() {
select! {
x = inner => {
writer = None;
match x {
Ok(_) => {},
Err(error) => {
return Err(error.into());
}
}
},
status = &mut wait => {
break status;
}
};
} else {
select! {
status = &mut wait => {
break status;
}
};
}
};
if let Some(writer) = writer {
writer.await??;
}
let status = status_result?;
if !status.success() {
Err(anyhow!(
"mksquashfs failed with exit code: {}",
status.code().unwrap()
))
} else {
let metadata = fs::metadata(&file).await?;
progress
.update(|progress| progress.complete(metadata.size()))
.await;
Ok(())
}
}
}
pub struct OciPackerMkfsErofs {}
#[async_trait::async_trait]
impl OciPackerBackend for OciPackerMkfsErofs {
async fn pack(&self, progress: OciBoundProgress, vfs: Arc<VfsTree>, file: &Path) -> Result<()> {
progress
.update(|progress| {
progress.start_packing();
})
.await;
let child = Command::new("mkfs.erofs")
.arg("-L")
.arg("root")
.arg("--tar=-")
.arg(file)
.stdin(Stdio::piped())
.stderr(Stdio::null())
.stdout(Stdio::null())
.spawn()?;
let mut child = ChildProcessKillGuard(child);
let stdin = child
.0
.stdin
.take()
.ok_or(anyhow!("unable to acquire stdin stream"))?;
let mut writer = Some(tokio::task::spawn(
async move { vfs.write_to_tar(stdin).await },
));
let wait = child.0.wait();
pin!(wait);
let status_result = loop {
if let Some(inner) = writer.as_mut() {
select! {
x = inner => {
match x {
Ok(_) => {
writer = None;
},
Err(error) => {
return Err(error.into());
}
}
},
status = &mut wait => {
break status;
}
};
} else {
select! {
status = &mut wait => {
break status;
}
};
}
};
if let Some(writer) = writer {
writer.await??;
}
let status = status_result?;
if !status.success() {
Err(anyhow!(
"mkfs.erofs failed with exit code: {}",
status.code().unwrap()
))
} else {
let metadata = fs::metadata(&file).await?;
progress
.update(|progress| {
progress.complete(metadata.size());
})
.await;
Ok(())
}
}
}
pub struct OciPackerTar {}
#[async_trait::async_trait]
impl OciPackerBackend for OciPackerTar {
async fn pack(&self, progress: OciBoundProgress, vfs: Arc<VfsTree>, file: &Path) -> Result<()> {
progress
.update(|progress| {
progress.start_packing();
})
.await;
let output = File::create(file).await?;
let output = BufWriter::new(output);
vfs.write_to_tar(output).await?;
let metadata = fs::metadata(file).await?;
progress
.update(|progress| {
progress.complete(metadata.size());
})
.await;
Ok(())
}
}
struct ChildProcessKillGuard(Child);
impl Drop for ChildProcessKillGuard {
fn drop(&mut self) {
let _ = self.0.start_kill();
}
}
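The backend pattern above can be reduced to a simplified synchronous sketch: stream bytes into an external tool's stdin while a drop guard ensures the child never outlives an error path. Here `cat` stands in for `mksquashfs` / `mkfs.erofs` so the sketch stays self-contained; names are illustrative.

```rust
use std::io::Write;
use std::process::{Command, Stdio};

// Best-effort kill on drop; mirrors the role of ChildProcessKillGuard above.
struct KillGuard(std::process::Child);

impl Drop for KillGuard {
    fn drop(&mut self) {
        let _ = self.0.kill();
    }
}

fn pack_via_child(data: &[u8]) -> std::io::Result<bool> {
    let mut child = KillGuard(
        Command::new("cat")
            .stdin(Stdio::piped())
            .stdout(Stdio::null())
            .spawn()?,
    );
    // The real backends write and wait concurrently (select! over the tar
    // writer and the child's exit) to avoid pipe deadlocks on large inputs;
    // a sequential write is fine for a small payload.
    child.0.stdin.take().expect("stdin was piped").write_all(data)?;
    let status = child.0.wait()?;
    Ok(status.success())
}

fn main() {
    assert!(pack_via_child(b"hello world").unwrap());
}
```

Dropping the stdin handle before `wait()` is what lets the child see EOF and exit; the `take()` above removes it from the guard so it closes when `write_all` finishes.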


@@ -0,0 +1,220 @@
use crate::{
name::ImageName,
packer::{OciPackedFormat, OciPackedImage},
schema::OciSchema,
};
use anyhow::Result;
use log::{debug, error};
use oci_spec::image::{
Descriptor, ImageConfiguration, ImageIndex, ImageIndexBuilder, ImageManifest, MediaType,
ANNOTATION_REF_NAME,
};
use std::{
path::{Path, PathBuf},
sync::Arc,
};
use tokio::{fs, sync::RwLock};
#[derive(Clone)]
pub struct OciPackerCache {
cache_dir: PathBuf,
index: Arc<RwLock<ImageIndex>>,
}
const ANNOTATION_IMAGE_NAME: &str = "io.containerd.image.name";
const ANNOTATION_OCI_PACKER_FORMAT: &str = "dev.krata.oci.packer.format";
impl OciPackerCache {
pub async fn new(cache_dir: &Path) -> Result<OciPackerCache> {
let index = ImageIndexBuilder::default()
.schema_version(2u32)
.media_type(MediaType::ImageIndex)
.manifests(Vec::new())
.build()?;
let cache = OciPackerCache {
cache_dir: cache_dir.to_path_buf(),
index: Arc::new(RwLock::new(index)),
};
{
let mut mutex = cache.index.write().await;
*mutex = cache.load_index().await?;
}
Ok(cache)
}
pub async fn list(&self) -> Result<Vec<Descriptor>> {
let index = self.index.read().await;
Ok(index.manifests().clone())
}
pub async fn recall(
&self,
name: ImageName,
digest: &str,
format: OciPackedFormat,
) -> Result<Option<OciPackedImage>> {
let index = self.index.read().await;
let mut descriptor: Option<Descriptor> = None;
for manifest in index.manifests() {
if manifest.digest() == digest
&& manifest
.annotations()
.as_ref()
.and_then(|x| x.get(ANNOTATION_OCI_PACKER_FORMAT))
.map(|x| x.as_str())
== Some(format.extension())
{
descriptor = Some(manifest.clone());
break;
}
}
let Some(descriptor) = descriptor else {
return Ok(None);
};
let mut fs_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
fs_path.push(format!("{}.{}", digest, format.extension()));
manifest_path.push(format!("{}.manifest.json", digest));
config_path.push(format!("{}.config.json", digest));
if fs_path.exists() && manifest_path.exists() && config_path.exists() {
let image_metadata = fs::metadata(&fs_path).await?;
let manifest_metadata = fs::metadata(&manifest_path).await?;
let config_metadata = fs::metadata(&config_path).await?;
if image_metadata.is_file() && manifest_metadata.is_file() && config_metadata.is_file()
{
let manifest_bytes = fs::read(&manifest_path).await?;
let manifest: ImageManifest = serde_json::from_slice(&manifest_bytes)?;
let config_bytes = fs::read(&config_path).await?;
let config: ImageConfiguration = serde_json::from_slice(&config_bytes)?;
debug!("cache hit digest={}", digest);
Ok(Some(OciPackedImage::new(
name,
digest.to_string(),
fs_path.clone(),
format,
descriptor,
OciSchema::new(config_bytes, config),
OciSchema::new(manifest_bytes, manifest),
)))
} else {
Ok(None)
}
} else {
debug!("cache miss digest={}", digest);
Ok(None)
}
}
pub async fn store(&self, packed: OciPackedImage) -> Result<OciPackedImage> {
let mut index = self.index.write().await;
let mut manifests = index.manifests().clone();
debug!("cache store digest={}", packed.digest);
let mut fs_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
fs_path.push(format!("{}.{}", packed.digest, packed.format.extension()));
manifest_path.push(format!("{}.manifest.json", packed.digest));
config_path.push(format!("{}.config.json", packed.digest));
if fs::rename(&packed.path, &fs_path).await.is_err() {
fs::copy(&packed.path, &fs_path).await?;
fs::remove_file(&packed.path).await?;
}
fs::write(&config_path, packed.config.raw()).await?;
fs::write(&manifest_path, packed.manifest.raw()).await?;
manifests.retain(|item| {
if item.digest() != &packed.digest {
return true;
}
let Some(format) = item
.annotations()
.as_ref()
.and_then(|x| x.get(ANNOTATION_OCI_PACKER_FORMAT))
.map(|x| x.as_str())
else {
return true;
};
if format != packed.format.extension() {
return true;
}
false
});
let mut descriptor = packed.descriptor.clone();
let mut annotations = descriptor.annotations().clone().unwrap_or_default();
annotations.insert(
ANNOTATION_OCI_PACKER_FORMAT.to_string(),
packed.format.extension().to_string(),
);
let image_name = packed.name.to_string();
annotations.insert(ANNOTATION_IMAGE_NAME.to_string(), image_name);
let image_ref = packed.name.reference.clone();
if let Some(image_ref) = image_ref {
annotations.insert(ANNOTATION_REF_NAME.to_string(), image_ref);
}
descriptor.set_annotations(Some(annotations));
manifests.push(descriptor.clone());
index.set_manifests(manifests);
self.save_index(&index).await?;
let packed = OciPackedImage::new(
packed.name,
packed.digest,
fs_path.clone(),
packed.format,
descriptor,
packed.config,
packed.manifest,
);
Ok(packed)
}
async fn save_empty_index(&self) -> Result<ImageIndex> {
let index = ImageIndexBuilder::default()
.schema_version(2u32)
.media_type(MediaType::ImageIndex)
.manifests(Vec::new())
.build()?;
self.save_index(&index).await?;
Ok(index)
}
async fn load_index(&self) -> Result<ImageIndex> {
let mut index_path = self.cache_dir.clone();
index_path.push("index.json");
if !index_path.exists() {
self.save_empty_index().await?;
}
let content = fs::read_to_string(&index_path).await?;
let index = match serde_json::from_str::<ImageIndex>(&content) {
Ok(index) => index,
Err(error) => {
error!("image index was corrupted, creating a new one: {}", error);
self.save_empty_index().await?
}
};
Ok(index)
}
async fn save_index(&self, index: &ImageIndex) -> Result<()> {
let mut encoded = serde_json::to_string_pretty(index)?;
encoded.push('\n');
let mut index_path = self.cache_dir.clone();
index_path.push("index.json");
fs::write(&index_path, encoded).await?;
Ok(())
}
}
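The cache file layout used by `recall` and `store` above follows one convention: each digest yields three sibling files in the cache directory. The sketch below restates it with an illustrative helper (not the crate's API); the digest string is a placeholder.

```rust
use std::path::{Path, PathBuf};

// Compute the image, manifest, and config paths for one cached digest.
fn cache_paths(cache_dir: &Path, digest: &str, extension: &str) -> (PathBuf, PathBuf, PathBuf) {
    let fs_path = cache_dir.join(format!("{}.{}", digest, extension));
    let manifest_path = cache_dir.join(format!("{}.manifest.json", digest));
    let config_path = cache_dir.join(format!("{}.config.json", digest));
    (fs_path, manifest_path, config_path)
}

fn main() {
    let (fs_path, manifest_path, config_path) =
        cache_paths(Path::new("/var/cache/oci"), "sha256-abcd", "squashfs");
    assert_eq!(fs_path, Path::new("/var/cache/oci/sha256-abcd.squashfs"));
    assert_eq!(manifest_path, Path::new("/var/cache/oci/sha256-abcd.manifest.json"));
    assert_eq!(config_path, Path::new("/var/cache/oci/sha256-abcd.config.json"));
}
```

A cache hit requires all three files to exist and be regular files, which is why `recall` checks each of them before deserializing anything.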


@@ -0,0 +1,69 @@
use std::path::PathBuf;
use crate::{name::ImageName, schema::OciSchema};
use self::backend::OciPackerBackendType;
use oci_spec::image::{Descriptor, ImageConfiguration, ImageManifest};
pub mod backend;
pub mod cache;
pub mod service;
#[derive(Debug, Default, Clone, Copy, Eq, PartialEq, Hash)]
pub enum OciPackedFormat {
#[default]
Squashfs,
Erofs,
Tar,
}
impl OciPackedFormat {
pub fn extension(&self) -> &str {
match self {
OciPackedFormat::Squashfs => "squashfs",
OciPackedFormat::Erofs => "erofs",
OciPackedFormat::Tar => "tar",
}
}
pub fn backend(&self) -> OciPackerBackendType {
match self {
OciPackedFormat::Squashfs => OciPackerBackendType::MkSquashfs,
OciPackedFormat::Erofs => OciPackerBackendType::MkfsErofs,
OciPackedFormat::Tar => OciPackerBackendType::Tar,
}
}
}
#[derive(Clone)]
pub struct OciPackedImage {
pub name: ImageName,
pub digest: String,
pub path: PathBuf,
pub format: OciPackedFormat,
pub descriptor: Descriptor,
pub config: OciSchema<ImageConfiguration>,
pub manifest: OciSchema<ImageManifest>,
}
impl OciPackedImage {
pub fn new(
name: ImageName,
digest: String,
path: PathBuf,
format: OciPackedFormat,
descriptor: Descriptor,
config: OciSchema<ImageConfiguration>,
manifest: OciSchema<ImageManifest>,
) -> OciPackedImage {
OciPackedImage {
name,
digest,
path,
format,
descriptor,
config,
manifest,
}
}
}


@@ -0,0 +1,278 @@
use std::{
collections::{hash_map::Entry, HashMap},
fmt::Display,
path::{Path, PathBuf},
sync::Arc,
};
use anyhow::{anyhow, Result};
use oci_spec::image::Descriptor;
use tokio::{
sync::{watch, Mutex},
task::JoinHandle,
};
use crate::{
assemble::OciImageAssembler,
fetch::{OciImageFetcher, OciResolvedImage},
name::ImageName,
progress::{OciBoundProgress, OciProgress, OciProgressContext},
registry::OciPlatform,
};
use log::{error, info, warn};
use super::{cache::OciPackerCache, OciPackedFormat, OciPackedImage};
pub struct OciPackerTask {
progress: OciBoundProgress,
watch: watch::Sender<Option<Result<OciPackedImage>>>,
task: JoinHandle<()>,
}
#[derive(Clone)]
pub struct OciPackerService {
seed: Option<PathBuf>,
platform: OciPlatform,
cache: OciPackerCache,
tasks: Arc<Mutex<HashMap<OciPackerTaskKey, OciPackerTask>>>,
}
impl OciPackerService {
pub async fn new(
seed: Option<PathBuf>,
cache_dir: &Path,
platform: OciPlatform,
) -> Result<OciPackerService> {
Ok(OciPackerService {
seed,
cache: OciPackerCache::new(cache_dir).await?,
platform,
tasks: Arc::new(Mutex::new(HashMap::new())),
})
}
pub async fn list(&self) -> Result<Vec<Descriptor>> {
self.cache.list().await
}
pub async fn recall(
&self,
digest: &str,
format: OciPackedFormat,
) -> Result<Option<OciPackedImage>> {
if digest.contains('/') || digest.contains('\\') || digest.contains("..") {
return Ok(None);
}
self.cache
.recall(ImageName::parse("cached:latest")?, digest, format)
.await
}
pub async fn request(
&self,
name: ImageName,
format: OciPackedFormat,
overwrite: bool,
progress_context: OciProgressContext,
) -> Result<OciPackedImage> {
let progress = OciProgress::new();
let progress = OciBoundProgress::new(progress_context.clone(), progress);
let fetcher =
OciImageFetcher::new(self.seed.clone(), self.platform.clone(), progress.clone());
let resolved = fetcher.resolve(name.clone()).await?;
let key = OciPackerTaskKey {
digest: resolved.digest.clone(),
format,
};
let (progress_copy_task, mut receiver) = match self.tasks.lock().await.entry(key.clone()) {
Entry::Occupied(entry) => {
let entry = entry.get();
(
Some(entry.progress.also_update(progress_context).await),
entry.watch.subscribe(),
)
}
Entry::Vacant(entry) => {
let task = self
.clone()
.launch(
name,
key.clone(),
format,
overwrite,
resolved,
fetcher,
progress.clone(),
)
.await;
let (watch, receiver) = watch::channel(None);
let task = OciPackerTask {
progress: progress.clone(),
task,
watch,
};
entry.insert(task);
(None, receiver)
}
};
let _progress_task_guard = scopeguard::guard(progress_copy_task, |task| {
if let Some(task) = task {
task.abort();
}
});
let _task_cancel_guard = scopeguard::guard(self.clone(), |service| {
service.maybe_cancel_task(key);
});
loop {
receiver.changed().await?;
let current = receiver.borrow_and_update();
if current.is_some() {
return current
.as_ref()
.map(|x| x.as_ref().map_err(|err| anyhow!("{}", err)).cloned())
.unwrap();
}
}
}
#[allow(clippy::too_many_arguments)]
async fn launch(
self,
name: ImageName,
key: OciPackerTaskKey,
format: OciPackedFormat,
overwrite: bool,
resolved: OciResolvedImage,
fetcher: OciImageFetcher,
progress: OciBoundProgress,
) -> JoinHandle<()> {
info!("started packer task {}", key);
tokio::task::spawn(async move {
let _task_drop_guard =
scopeguard::guard((key.clone(), self.clone()), |(key, service)| {
service.ensure_task_gone(key);
});
if let Err(error) = self
.task(
name,
key.clone(),
format,
overwrite,
resolved,
fetcher,
progress,
)
.await
{
self.finish(&key, Err(error)).await;
}
})
}
#[allow(clippy::too_many_arguments)]
async fn task(
&self,
name: ImageName,
key: OciPackerTaskKey,
format: OciPackedFormat,
overwrite: bool,
resolved: OciResolvedImage,
fetcher: OciImageFetcher,
progress: OciBoundProgress,
) -> Result<()> {
if !overwrite {
if let Some(cached) = self
.cache
.recall(name.clone(), &resolved.digest, format)
.await?
{
self.finish(&key, Ok(cached)).await;
return Ok(());
}
}
let assembler =
OciImageAssembler::new(fetcher, resolved, progress.clone(), None, None).await?;
let assembled = assembler.assemble().await?;
let mut file = assembled
.tmp_dir
.clone()
.ok_or(anyhow!("tmp_dir was missing when packing image"))?;
file.push("image.pack");
let target = file.clone();
let packer = format.backend().create();
packer
.pack(progress, assembled.vfs.clone(), &target)
.await?;
let packed = OciPackedImage::new(
name,
assembled.digest.clone(),
file,
format,
assembled.descriptor.clone(),
assembled.config.clone(),
assembled.manifest.clone(),
);
let packed = self.cache.store(packed).await?;
self.finish(&key, Ok(packed)).await;
Ok(())
}
async fn finish(&self, key: &OciPackerTaskKey, result: Result<OciPackedImage>) {
let Some(task) = self.tasks.lock().await.remove(key) else {
error!("packer task {} was not found when task completed", key);
return;
};
match result.as_ref() {
Ok(_) => {
info!("completed packer task {}", key);
}
Err(err) => {
warn!("packer task {} failed: {}", key, err);
}
}
task.watch.send_replace(Some(result));
}
fn maybe_cancel_task(self, key: OciPackerTaskKey) {
tokio::task::spawn(async move {
let tasks = self.tasks.lock().await;
if let Some(task) = tasks.get(&key) {
if task.watch.is_closed() {
task.task.abort();
}
}
});
}
fn ensure_task_gone(self, key: OciPackerTaskKey) {
tokio::task::spawn(async move {
let mut tasks = self.tasks.lock().await;
if let Some(task) = tasks.remove(&key) {
warn!("aborted packer task {}", key);
task.watch.send_replace(Some(Err(anyhow!("task aborted"))));
}
});
}
}
#[derive(Debug, Clone, Eq, PartialEq, Hash)]
struct OciPackerTaskKey {
digest: String,
format: OciPackedFormat,
}
impl Display for OciPackerTaskKey {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_fmt(format_args!("{}:{}", self.digest, self.format.extension()))
}
}
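The deduplication idea in `OciPackerService::request` above can be sketched without the async plumbing: concurrent requests for the same `(digest, format)` key share one in-flight task instead of packing twice. The names and the `u32` "task id" below are illustrative stand-ins for the real watch-channel machinery.

```rust
use std::collections::hash_map::{Entry, HashMap};

#[derive(Debug, Clone, Eq, PartialEq, Hash)]
struct TaskKey {
    digest: String,
    format: &'static str,
}

/// Returns (launched_new_task, task_id).
fn subscribe_or_launch(
    tasks: &mut HashMap<TaskKey, u32>,
    key: TaskKey,
    next_id: u32,
) -> (bool, u32) {
    match tasks.entry(key) {
        // An identical request is already running: just subscribe to it.
        Entry::Occupied(entry) => (false, *entry.get()),
        // First request for this key: record it and launch a new task.
        Entry::Vacant(entry) => {
            entry.insert(next_id);
            (true, next_id)
        }
    }
}

fn main() {
    let mut tasks = HashMap::new();
    let key = TaskKey { digest: "sha256:abcd".into(), format: "squashfs" };
    // The first caller launches; the second joins the existing task.
    assert_eq!(subscribe_or_launch(&mut tasks, key.clone(), 1), (true, 1));
    assert_eq!(subscribe_or_launch(&mut tasks, key, 2), (false, 1));
}
```

Keying on the resolved digest rather than the requested name is what makes `alpine:latest` and `alpine@sha256:...` share a task when they resolve to the same manifest.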

crates/oci/src/progress.rs Normal file

@@ -0,0 +1,238 @@
use indexmap::IndexMap;
use std::sync::Arc;
use tokio::{
sync::{watch, Mutex},
task::JoinHandle,
};
#[derive(Clone, Debug)]
pub struct OciProgress {
pub phase: OciProgressPhase,
pub digest: Option<String>,
pub layers: IndexMap<String, OciProgressLayer>,
pub indication: OciProgressIndication,
}
impl Default for OciProgress {
fn default() -> Self {
Self::new()
}
}
impl OciProgress {
pub fn new() -> Self {
OciProgress {
phase: OciProgressPhase::Started,
digest: None,
layers: IndexMap::new(),
indication: OciProgressIndication::Hidden,
}
}
pub fn start_resolving(&mut self) {
self.phase = OciProgressPhase::Resolving;
self.indication = OciProgressIndication::Spinner { message: None };
}
pub fn resolved(&mut self, digest: &str) {
self.digest = Some(digest.to_string());
self.indication = OciProgressIndication::Hidden;
}
pub fn add_layer(&mut self, id: &str) {
self.layers.insert(
id.to_string(),
OciProgressLayer {
id: id.to_string(),
phase: OciProgressLayerPhase::Waiting,
indication: OciProgressIndication::Spinner { message: None },
},
);
}
pub fn downloading_layer(&mut self, id: &str, downloaded: u64, total: u64) {
if let Some(entry) = self.layers.get_mut(id) {
            entry.phase = OciProgressLayerPhase::Downloading;
            entry.indication = OciProgressIndication::ProgressBar {
                message: None,
                current: downloaded,
                total,
                bytes: true,
            };
        }
    }
    pub fn downloaded_layer(&mut self, id: &str, total: u64) {
        if let Some(entry) = self.layers.get_mut(id) {
            entry.phase = OciProgressLayerPhase::Downloaded;
            entry.indication = OciProgressIndication::Completed {
                message: None,
                total: Some(total),
                bytes: true,
            };
        }
    }
    pub fn start_assemble(&mut self) {
        self.phase = OciProgressPhase::Assemble;
        self.indication = OciProgressIndication::Hidden;
    }
    pub fn start_extracting_layer(&mut self, id: &str) {
        if let Some(entry) = self.layers.get_mut(id) {
            entry.phase = OciProgressLayerPhase::Extracting;
            entry.indication = OciProgressIndication::Spinner { message: None };
        }
    }
    pub fn extracting_layer(&mut self, id: &str, file: &str) {
        if let Some(entry) = self.layers.get_mut(id) {
            entry.phase = OciProgressLayerPhase::Extracting;
            entry.indication = OciProgressIndication::Spinner {
                message: Some(file.to_string()),
            };
        }
    }
    pub fn extracted_layer(&mut self, id: &str, count: u64, total_size: u64) {
        if let Some(entry) = self.layers.get_mut(id) {
            entry.phase = OciProgressLayerPhase::Extracted;
            entry.indication = OciProgressIndication::Completed {
                message: Some(format!("{} files", count)),
                total: Some(total_size),
                bytes: true,
            };
        }
    }
    pub fn start_packing(&mut self) {
        self.phase = OciProgressPhase::Pack;
        for layer in self.layers.values_mut() {
            layer.indication = OciProgressIndication::Hidden;
        }
        self.indication = OciProgressIndication::Spinner { message: None };
    }
    pub fn complete(&mut self, size: u64) {
        self.phase = OciProgressPhase::Complete;
        self.indication = OciProgressIndication::Completed {
            message: None,
            total: Some(size),
            bytes: true,
        }
    }
}
#[derive(Clone, Debug)]
pub enum OciProgressPhase {
    Started,
    Resolving,
    Resolved,
    ConfigDownload,
    LayerDownload,
    Assemble,
    Pack,
    Complete,
}
#[derive(Clone, Debug)]
pub enum OciProgressIndication {
    Hidden,
    ProgressBar {
        message: Option<String>,
        current: u64,
        total: u64,
        bytes: bool,
    },
    Spinner {
        message: Option<String>,
    },
    Completed {
        message: Option<String>,
        total: Option<u64>,
        bytes: bool,
    },
}
#[derive(Clone, Debug)]
pub struct OciProgressLayer {
    pub id: String,
    pub phase: OciProgressLayerPhase,
    pub indication: OciProgressIndication,
}
#[derive(Clone, Debug)]
pub enum OciProgressLayerPhase {
    Waiting,
    Downloading,
    Downloaded,
    Extracting,
    Extracted,
}
#[derive(Clone)]
pub struct OciProgressContext {
    sender: watch::Sender<OciProgress>,
}
impl OciProgressContext {
    pub fn create() -> (OciProgressContext, watch::Receiver<OciProgress>) {
        let (sender, receiver) = watch::channel(OciProgress::new());
        (OciProgressContext::new(sender), receiver)
    }
    pub fn new(sender: watch::Sender<OciProgress>) -> OciProgressContext {
        OciProgressContext { sender }
    }
    pub fn update(&self, progress: &OciProgress) {
        let _ = self.sender.send(progress.clone());
    }
    pub fn subscribe(&self) -> watch::Receiver<OciProgress> {
        self.sender.subscribe()
    }
}
#[derive(Clone)]
pub struct OciBoundProgress {
    context: OciProgressContext,
    instance: Arc<Mutex<OciProgress>>,
}
impl OciBoundProgress {
    pub fn new(context: OciProgressContext, progress: OciProgress) -> OciBoundProgress {
        OciBoundProgress {
            context,
            instance: Arc::new(Mutex::new(progress)),
        }
    }
    pub async fn update(&self, function: impl FnOnce(&mut OciProgress)) {
        let mut progress = self.instance.lock().await;
        function(&mut progress);
        self.context.update(&progress);
    }
    pub fn update_blocking(&self, function: impl FnOnce(&mut OciProgress)) {
        let mut progress = self.instance.blocking_lock();
        function(&mut progress);
        self.context.update(&progress);
    }
    pub async fn also_update(&self, context: OciProgressContext) -> JoinHandle<()> {
        let progress = self.instance.lock().await.clone();
        context.update(&progress);
        let mut receiver = self.context.subscribe();
        tokio::task::spawn(async move {
            while (receiver.changed().await).is_ok() {
                context
                    .sender
                    .send_replace(receiver.borrow_and_update().clone());
            }
        })
    }
}
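The `OciProgressIndication` enum above is what a frontend renders for each layer. A minimal standalone sketch of such a renderer, using a trimmed re-declaration of the enum (this is illustrative only, not the crate's actual rendering code):

```rust
// Trimmed stand-in for OciProgressIndication from the module above;
// the real enum carries extra fields (message on ProgressBar, bytes flags).
enum Indication {
    Hidden,
    ProgressBar { current: u64, total: u64 },
    Spinner { message: Option<String> },
    Completed { total: Option<u64> },
}

// Turn one indication into a single status line.
fn render(i: &Indication) -> String {
    match i {
        Indication::Hidden => String::new(),
        Indication::ProgressBar { current, total } => {
            format!("{}/{} bytes", current, total)
        }
        Indication::Spinner { message } => {
            message.clone().unwrap_or_else(|| "working...".to_string())
        }
        Indication::Completed { total } => match total {
            Some(t) => format!("done ({} bytes)", t),
            None => "done".to_string(),
        },
    }
}

fn main() {
    println!("{}", render(&Indication::ProgressBar { current: 10, total: 100 }));
    println!("{}", render(&Indication::Completed { total: None }));
}
```

Because every mutation above goes through `OciBoundProgress::update`, the watch channel always broadcasts a fully consistent snapshot, and a renderer like this never observes a half-updated layer list.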


@@ -7,26 +7,28 @@ use reqwest::{Client, RequestBuilder, Response, StatusCode};
 use tokio::{fs::File, io::AsyncWriteExt};
 use url::Url;
+use crate::{name::ImageName, progress::OciBoundProgress, schema::OciSchema};
 #[derive(Clone, Debug)]
-pub struct OciRegistryPlatform {
+pub struct OciPlatform {
     pub os: Os,
     pub arch: Arch,
 }
-impl OciRegistryPlatform {
+impl OciPlatform {
     #[cfg(target_arch = "x86_64")]
     const CURRENT_ARCH: Arch = Arch::Amd64;
     #[cfg(target_arch = "aarch64")]
     const CURRENT_ARCH: Arch = Arch::ARM64;
-    pub fn new(os: Os, arch: Arch) -> OciRegistryPlatform {
-        OciRegistryPlatform { os, arch }
+    pub fn new(os: Os, arch: Arch) -> OciPlatform {
+        OciPlatform { os, arch }
     }
-    pub fn current() -> OciRegistryPlatform {
-        OciRegistryPlatform {
+    pub fn current() -> OciPlatform {
+        OciPlatform {
             os: Os::Linux,
-            arch: OciRegistryPlatform::CURRENT_ARCH,
+            arch: OciPlatform::CURRENT_ARCH,
         }
     }
 }
@@ -34,12 +36,12 @@ impl OciRegistryPlatform {
 pub struct OciRegistryClient {
     agent: Client,
     url: Url,
-    platform: OciRegistryPlatform,
+    platform: OciPlatform,
     token: Option<String>,
 }
 impl OciRegistryClient {
-    pub fn new(url: Url, platform: OciRegistryPlatform) -> Result<OciRegistryClient> {
+    pub fn new(url: Url, platform: OciPlatform) -> Result<OciRegistryClient> {
         Ok(OciRegistryClient {
             agent: Client::new(),
             url,
@@ -138,6 +140,7 @@ impl OciRegistryClient {
         name: N,
         descriptor: &Descriptor,
         mut dest: File,
+        progress: Option<OciBoundProgress>,
     ) -> Result<u64> {
         let url = self.url.join(&format!(
             "/v2/{}/blobs/{}",
@@ -149,6 +152,18 @@ impl OciRegistryClient {
         while let Some(chunk) = response.chunk().await? {
             dest.write_all(&chunk).await?;
             size += chunk.len() as u64;
+            if let Some(ref progress) = progress {
+                progress
+                    .update(|progress| {
+                        progress.downloading_layer(
+                            descriptor.digest(),
+                            size,
+                            descriptor.size() as u64,
+                        );
+                    })
+                    .await;
+            }
         }
         Ok(size)
     }
@@ -157,11 +172,11 @@ impl OciRegistryClient {
         &mut self,
         name: N,
         reference: R,
-    ) -> Result<(ImageManifest, String)> {
+    ) -> Result<(OciSchema<ImageManifest>, String)> {
         let url = self.url.join(&format!(
             "/v2/{}/manifests/{}",
             name.as_ref(),
-            reference.as_ref()
+            reference.as_ref(),
         ))?;
         let accept = format!(
             "{}, {}, {}, {}",
@@ -179,20 +194,28 @@ impl OciRegistryClient {
             .ok_or_else(|| anyhow!("fetching manifest did not yield a content digest"))?
             .to_str()?
             .to_string();
-        let manifest = serde_json::from_str(&response.text().await?)?;
-        Ok((manifest, digest))
+        let bytes = response.bytes().await?;
+        let manifest = serde_json::from_slice(&bytes)?;
+        Ok((OciSchema::new(bytes.to_vec(), manifest), digest))
     }
     pub async fn get_manifest_with_digest<N: AsRef<str>, R: AsRef<str>>(
         &mut self,
         name: N,
-        reference: R,
-    ) -> Result<(ImageManifest, String)> {
-        let url = self.url.join(&format!(
-            "/v2/{}/manifests/{}",
-            name.as_ref(),
-            reference.as_ref()
-        ))?;
+        reference: Option<R>,
+        digest: Option<N>,
+    ) -> Result<(OciSchema<ImageManifest>, Option<Descriptor>, String)> {
+        let what = digest
+            .as_ref()
+            .map(|x| x.as_ref().to_string())
+            .unwrap_or_else(|| {
+                reference
+                    .map(|x| x.as_ref().to_string())
+                    .unwrap_or_else(|| ImageName::DEFAULT_IMAGE_TAG.to_string())
+            });
+        let url = self
+            .url
+            .join(&format!("/v2/{}/manifests/{}", name.as_ref(), what,))?;
         let accept = format!(
             "{}, {}, {}, {}",
             MediaType::ImageManifest.to_docker_v2s2()?,
@@ -215,18 +238,21 @@ impl OciRegistryClient {
             let descriptor = self
                 .pick_manifest(index)
                 .ok_or_else(|| anyhow!("unable to pick manifest from index"))?;
-            return self
+            let (manifest, digest) = self
                 .get_raw_manifest_with_digest(name, descriptor.digest())
-                .await;
+                .await?;
+            return Ok((manifest, Some(descriptor), digest));
         }
         let digest = response
             .headers()
             .get("Docker-Content-Digest")
-            .ok_or_else(|| anyhow!("fetching manifest did not yield a content digest"))?
-            .to_str()?
-            .to_string();
-        let manifest = serde_json::from_str(&response.text().await?)?;
-        Ok((manifest, digest))
+            .and_then(|x| x.to_str().ok())
+            .map(|x| x.to_string())
+            .or_else(|| digest.map(|x: N| x.as_ref().to_string()))
+            .ok_or_else(|| anyhow!("fetching manifest did not yield a content digest"))?;
+        let bytes = response.bytes().await?;
+        let manifest = serde_json::from_slice(&bytes)?;
+        Ok((OciSchema::new(bytes.to_vec(), manifest), None, digest))
     }
     fn pick_manifest(&mut self, index: ImageIndex) -> Option<Descriptor> {
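The reworked digest resolution above prefers the `Docker-Content-Digest` response header and falls back to a caller-supplied digest before erroring. The `Option` chaining can be sketched in isolation (a hypothetical standalone function, not the crate's API):

```rust
// Stand-in for the fallback chain in get_manifest_with_digest:
// header value first, then the digest the caller asked for, else error.
fn resolve_digest(
    header: Option<&str>,
    provided: Option<&str>,
) -> Result<String, String> {
    header
        .map(|x| x.to_string())
        .or_else(|| provided.map(|x| x.to_string()))
        .ok_or_else(|| "fetching manifest did not yield a content digest".to_string())
}

fn main() {
    println!("{:?}", resolve_digest(Some("sha256:aa"), None));
    println!("{:?}", resolve_digest(None, Some("sha256:bb")));
    println!("{:?}", resolve_digest(None, None));
}
```

The fallback matters for fetch-by-digest: some registries omit the header in that case, but the caller already knows the digest it requested.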

crates/oci/src/schema.rs (new file, 29 lines)

@@ -0,0 +1,29 @@
use std::fmt::Debug;
#[derive(Clone, Debug)]
pub struct OciSchema<T: Clone + Debug> {
    raw: Vec<u8>,
    item: T,
}
impl<T: Clone + Debug> OciSchema<T> {
    pub fn new(raw: Vec<u8>, item: T) -> OciSchema<T> {
        OciSchema { raw, item }
    }
    pub fn raw(&self) -> &[u8] {
        &self.raw
    }
    pub fn item(&self) -> &T {
        &self.item
    }
    pub fn into_raw(self) -> Vec<u8> {
        self.raw
    }
    pub fn into_item(self) -> T {
        self.item
    }
}
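The point of `OciSchema` is that re-serializing a parsed manifest rarely reproduces the exact fetched bytes (key order, whitespace), which would change its content-addressed digest. A standalone sketch of the pattern, re-declared here with a stand-in payload type instead of `ImageManifest`:

```rust
// Standalone stand-in for OciSchema: keep the exact wire bytes next to
// the parsed value, so digest computation always sees the original form.
#[derive(Clone, Debug)]
struct Schema<T: Clone + std::fmt::Debug> {
    raw: Vec<u8>,
    item: T,
}

impl<T: Clone + std::fmt::Debug> Schema<T> {
    fn new(raw: Vec<u8>, item: T) -> Self {
        Schema { raw, item }
    }
    fn raw(&self) -> &[u8] {
        &self.raw
    }
    fn item(&self) -> &T {
        &self.item
    }
}

fn main() {
    // Pretend these bytes came off the wire with idiosyncratic spacing.
    let raw = b"{ \"schemaVersion\": 2 }".to_vec();
    let parsed_version = 2u32; // stand-in for a parsed manifest
    let schema = Schema::new(raw.clone(), parsed_version);
    // A naive re-serialization loses the original spacing...
    let reserialized = format!("{{\"schemaVersion\":{}}}", schema.item());
    assert_ne!(schema.raw(), reserialized.as_bytes());
    // ...while the wrapper still holds the digest-relevant bytes.
    assert_eq!(schema.raw(), raw.as_slice());
    println!("raw preserved: {} bytes", schema.raw().len());
}
```

This is why `get_raw_manifest_with_digest` above switched from `response.text()` plus `from_str` to `response.bytes()` plus `from_slice`: the untouched byte buffer is kept alongside the parsed manifest.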

crates/oci/src/vfs.rs (new file, 264 lines)

@@ -0,0 +1,264 @@
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Result};
use tokio::{
    fs::File,
    io::{AsyncRead, AsyncWrite, AsyncWriteExt},
};
use tokio_tar::{Builder, Entry, EntryType, Header};
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum VfsNodeType {
    Directory,
    RegularFile,
    Symlink,
    Hardlink,
    Fifo,
    CharDevice,
    BlockDevice,
}
#[derive(Clone, Debug)]
pub struct VfsNode {
    pub name: String,
    pub size: u64,
    pub children: Vec<VfsNode>,
    pub typ: VfsNodeType,
    pub uid: u64,
    pub gid: u64,
    pub link_name: Option<String>,
    pub mode: u32,
    pub mtime: u64,
    pub dev_major: Option<u32>,
    pub dev_minor: Option<u32>,
    pub disk_path: Option<PathBuf>,
}
impl VfsNode {
    pub fn from<X: AsyncRead + Unpin>(entry: &Entry<X>) -> Result<VfsNode> {
        let header = entry.header();
        let name = entry
            .path()?
            .file_name()
            .ok_or(anyhow!("unable to get file name for entry"))?
            .to_string_lossy()
            .to_string();
        let typ = header.entry_type();
        let vtype = if typ.is_symlink() {
            VfsNodeType::Symlink
        } else if typ.is_hard_link() {
            VfsNodeType::Hardlink
        } else if typ.is_dir() {
            VfsNodeType::Directory
        } else if typ.is_fifo() {
            VfsNodeType::Fifo
        } else if typ.is_block_special() {
            VfsNodeType::BlockDevice
        } else if typ.is_character_special() {
            VfsNodeType::CharDevice
        } else if typ.is_file() {
            VfsNodeType::RegularFile
        } else {
            return Err(anyhow!("unable to determine vfs type for entry"));
        };
        Ok(VfsNode {
            name,
            size: header.size()?,
            children: vec![],
            typ: vtype,
            uid: header.uid()?,
            gid: header.gid()?,
            link_name: header.link_name()?.map(|x| x.to_string_lossy().to_string()),
            mode: header.mode()?,
            mtime: header.mtime()?,
            dev_major: header.device_major()?,
            dev_minor: header.device_minor()?,
            disk_path: None,
        })
    }
    pub fn lookup(&self, path: &Path) -> Option<&VfsNode> {
        let mut node = self;
        for part in path {
            node = node
                .children
                .iter()
                .find(|child| child.name == part.to_string_lossy())?;
        }
        Some(node)
    }
    pub fn lookup_mut(&mut self, path: &Path) -> Option<&mut VfsNode> {
        let mut node = self;
        for part in path {
            node = node
                .children
                .iter_mut()
                .find(|child| child.name == part.to_string_lossy())?;
        }
        Some(node)
    }
    pub fn remove(&mut self, path: &Path) -> Option<(&mut VfsNode, VfsNode)> {
        let parent = path.parent()?;
        let node = self.lookup_mut(parent)?;
        let file_name = path.file_name()?;
        let file_name = file_name.to_string_lossy();
        let position = node
            .children
            .iter()
            .position(|child| file_name == child.name)?;
        let removed = node.children.remove(position);
        Some((node, removed))
    }
    pub fn create_tar_header(&self) -> Result<Header> {
        let mut header = Header::new_ustar();
        header.set_entry_type(match self.typ {
            VfsNodeType::Directory => EntryType::Directory,
            VfsNodeType::CharDevice => EntryType::Char,
            VfsNodeType::BlockDevice => EntryType::Block,
            VfsNodeType::Fifo => EntryType::Fifo,
            VfsNodeType::Hardlink => EntryType::Link,
            VfsNodeType::Symlink => EntryType::Symlink,
            VfsNodeType::RegularFile => EntryType::Regular,
        });
        header.set_uid(self.uid);
        header.set_gid(self.gid);
        if let Some(device_major) = self.dev_major {
            header.set_device_major(device_major)?;
        }
        if let Some(device_minor) = self.dev_minor {
            header.set_device_minor(device_minor)?;
        }
        header.set_mtime(self.mtime);
        header.set_mode(self.mode);
        if let Some(link_name) = self.link_name.as_ref() {
            header.set_link_name(&PathBuf::from(link_name))?;
        }
        header.set_size(self.size);
        Ok(header)
    }
    pub async fn write_to_tar<W: AsyncWrite + Unpin + Send>(
        &self,
        path: &Path,
        builder: &mut Builder<W>,
    ) -> Result<()> {
        let mut header = self.create_tar_header()?;
        header.set_path(path)?;
        header.set_cksum();
        if let Some(disk_path) = self.disk_path.as_ref() {
            builder
                .append(&header, File::open(disk_path).await?)
                .await?;
        } else {
            builder.append(&header, &[] as &[u8]).await?;
        }
        Ok(())
    }
}
#[derive(Clone, Debug)]
pub struct VfsTree {
    pub root: VfsNode,
}
impl Default for VfsTree {
    fn default() -> Self {
        Self::new()
    }
}
impl VfsTree {
    pub fn new() -> VfsTree {
        VfsTree {
            root: VfsNode {
                name: "".to_string(),
                size: 0,
                children: vec![],
                typ: VfsNodeType::Directory,
                uid: 0,
                gid: 0,
                link_name: None,
                mode: 0,
                mtime: 0,
                dev_major: None,
                dev_minor: None,
                disk_path: None,
            },
        }
    }
    pub fn insert_tar_entry<X: AsyncRead + Unpin>(&mut self, entry: &Entry<X>) -> Result<&VfsNode> {
        let mut meta = VfsNode::from(entry)?;
        let path = entry.path()?.to_path_buf();
        let parent = if let Some(parent) = path.parent() {
            self.root.lookup_mut(parent)
        } else {
            Some(&mut self.root)
        };
        let Some(parent) = parent else {
            return Err(anyhow!("unable to find parent of entry"));
        };
        let position = parent
            .children
            .iter()
            .position(|child| meta.name == child.name);
        if let Some(position) = position {
            let old = parent.children.remove(position);
            if meta.typ == VfsNodeType::Directory {
                meta.children = old.children;
            }
        }
        parent.children.push(meta.clone());
        let Some(reference) = parent.children.iter().find(|child| child.name == meta.name) else {
            return Err(anyhow!("unable to find inserted child in vfs"));
        };
        Ok(reference)
    }
    pub fn set_disk_path(&mut self, path: &Path, disk_path: &Path) -> Result<()> {
        let Some(node) = self.root.lookup_mut(path) else {
            return Err(anyhow!(
                "unable to find node {:?} to set disk path to",
                path
            ));
        };
        node.disk_path = Some(disk_path.to_path_buf());
        Ok(())
    }
    pub async fn write_to_tar<W: AsyncWrite + Unpin + Send + 'static>(
        &self,
        write: W,
    ) -> Result<()> {
        let mut builder = Builder::new(write);
        let mut queue = vec![(PathBuf::from(""), &self.root)];
        while !queue.is_empty() {
            let (mut path, node) = queue.remove(0);
            if !node.name.is_empty() {
                path.push(&node.name);
            }
            if path.components().count() != 0 {
                node.write_to_tar(&path, &mut builder).await?;
            }
            for child in &node.children {
                queue.push((path.clone(), child));
            }
        }
        let mut write = builder.into_inner().await?;
        write.flush().await?;
        drop(write);
        Ok(())
    }
}
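`VfsNode::lookup` walks path components down the in-memory tree, bailing out with `None` as soon as a component is missing. A std-only sketch of the same traversal, with a stand-in `Node` type in place of the crate's `VfsNode`:

```rust
use std::path::Path;

// Minimal stand-in for VfsNode: just a name and children.
struct Node {
    name: String,
    children: Vec<Node>,
}

impl Node {
    // Mirrors VfsNode::lookup above: iterate the path's components and
    // descend into the matching child at each step.
    fn lookup(&self, path: &Path) -> Option<&Node> {
        let mut node = self;
        for part in path {
            node = node
                .children
                .iter()
                .find(|child| child.name == part.to_string_lossy())?;
        }
        Some(node)
    }
}

fn main() {
    // Root with a single /etc/hostname entry.
    let root = Node {
        name: "".to_string(),
        children: vec![Node {
            name: "etc".to_string(),
            children: vec![Node {
                name: "hostname".to_string(),
                children: vec![],
            }],
        }],
    };
    println!("{}", root.lookup(Path::new("etc/hostname")).is_some());
    println!("{}", root.lookup(Path::new("etc/missing")).is_some());
}
```

Note that iterating `&Path` yields one `OsStr` per component, which is why an empty relative path resolves to the node itself — the same property `insert_tar_entry` relies on when an entry has no parent.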


@@ -12,18 +12,18 @@ resolver = "2"
 anyhow = { workspace = true }
 backhand = { workspace = true }
 ipnetwork = { workspace = true }
-krata = { path = "../krata", version = "^0.0.7" }
+krata = { path = "../krata", version = "^0.0.10" }
 krata-advmac = { workspace = true }
-krata-oci = { path = "../oci", version = "^0.0.7" }
+krata-oci = { path = "../oci", version = "^0.0.10" }
 log = { workspace = true }
 loopdev-3 = { workspace = true }
 serde_json = { workspace = true }
 tokio = { workspace = true }
 uuid = { workspace = true }
-krata-xenclient = { path = "../xen/xenclient", version = "^0.0.7" }
-krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.7" }
-krata-xengnt = { path = "../xen/xengnt", version = "^0.0.7" }
-krata-xenstore = { path = "../xen/xenstore", version = "^0.0.7" }
+krata-xenclient = { path = "../xen/xenclient", version = "^0.0.10" }
+krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.10" }
+krata-xengnt = { path = "../xen/xengnt", version = "^0.0.10" }
+krata-xenstore = { path = "../xen/xenstore", version = "^0.0.10" }
 [lib]
 name = "kratart"
@@ -31,10 +31,6 @@ name = "kratart"
 [dev-dependencies]
 env_logger = { workspace = true }
-[[example]]
-name = "kratart-squashify"
-path = "examples/squashify.rs"
 [[example]]
 name = "kratart-channel"
 path = "examples/channel.rs"


@@ -1,29 +0,0 @@
-use std::{env::args, path::PathBuf};
-use anyhow::Result;
-use env_logger::Env;
-use krataoci::{cache::ImageCache, compiler::ImageCompiler, name::ImageName};
-use tokio::fs;
-#[tokio::main]
-async fn main() -> Result<()> {
-    env_logger::Builder::from_env(Env::default().default_filter_or("info")).init();
-    let image = ImageName::parse(&args().nth(1).unwrap())?;
-    let seed = args().nth(2).map(PathBuf::from);
-    let cache_dir = PathBuf::from("krata-cache");
-    if !cache_dir.exists() {
-        fs::create_dir(&cache_dir).await?;
-    }
-    let cache = ImageCache::new(&cache_dir)?;
-    let compiler = ImageCompiler::new(&cache, seed)?;
-    let info = compiler.compile(&image).await?;
-    println!(
-        "generated squashfs of {} to {}",
-        image,
-        info.image_squashfs.to_string_lossy()
-    );
-    Ok(())
-}


@@ -1,7 +1,7 @@
 use anyhow::Result;
 use backhand::{FilesystemWriter, NodeHeader};
 use krata::launchcfg::LaunchInfo;
-use krataoci::compiler::ImageInfo;
+use krataoci::packer::OciPackedImage;
 use log::trace;
 use std::fs;
 use std::fs::File;
@@ -9,28 +9,24 @@ use std::path::PathBuf;
 use uuid::Uuid;
 pub struct ConfigBlock<'a> {
-    pub image_info: &'a ImageInfo,
+    pub image: &'a OciPackedImage,
     pub file: PathBuf,
     pub dir: PathBuf,
 }
 impl ConfigBlock<'_> {
-    pub fn new<'a>(uuid: &Uuid, image_info: &'a ImageInfo) -> Result<ConfigBlock<'a>> {
+    pub fn new<'a>(uuid: &Uuid, image: &'a OciPackedImage) -> Result<ConfigBlock<'a>> {
         let mut dir = std::env::temp_dir().clone();
         dir.push(format!("krata-cfg-{}", uuid));
         fs::create_dir_all(&dir)?;
         let mut file = dir.clone();
         file.push("config.squashfs");
-        Ok(ConfigBlock {
-            image_info,
-            file,
-            dir,
-        })
+        Ok(ConfigBlock { image, file, dir })
     }
     pub fn build(&self, launch_config: &LaunchInfo) -> Result<()> {
         trace!("build launch_config={:?}", launch_config);
-        let manifest = self.image_info.config.to_string()?;
+        let config = self.image.config.raw();
         let launch = serde_json::to_string(launch_config)?;
         let mut writer = FilesystemWriter::default();
         writer.push_dir(
@@ -43,7 +39,7 @@ impl ConfigBlock<'_> {
             },
         )?;
         writer.push_file(
-            manifest.as_bytes(),
+            config,
             "/image/config.json",
             NodeHeader {
                 permissions: 384,


@@ -48,7 +48,7 @@ pub struct ChannelService {
     gnttab: GrantTab,
     input_receiver: Receiver<(u32, Vec<u8>)>,
     pub input_sender: Sender<(u32, Vec<u8>)>,
-    output_sender: Sender<(u32, Vec<u8>)>,
+    output_sender: Sender<(u32, Option<Vec<u8>>)>,
 }
 impl ChannelService {
@@ -58,7 +58,7 @@ impl ChannelService {
     ) -> Result<(
         ChannelService,
         Sender<(u32, Vec<u8>)>,
-        Receiver<(u32, Vec<u8>)>,
+        Receiver<(u32, Option<Vec<u8>>)>,
     )> {
         let (input_sender, input_receiver) = channel(GROUPED_CHANNEL_QUEUE_LEN);
         let (output_sender, output_receiver) = channel(GROUPED_CHANNEL_QUEUE_LEN);
@@ -203,12 +203,14 @@ pub struct ChannelBackend {
     pub domid: u32,
     pub id: u32,
     pub sender: Sender<Vec<u8>>,
+    raw_sender: Sender<(u32, Option<Vec<u8>>)>,
     task: JoinHandle<()>,
 }
 impl Drop for ChannelBackend {
     fn drop(&mut self) {
         self.task.abort();
+        let _ = self.raw_sender.try_send((self.domid, None));
         debug!(
             "destroyed channel backend for domain {} channel {}",
             self.domid, self.id
@@ -226,7 +228,7 @@ impl ChannelBackend {
         store: XsdClient,
         evtchn: EventChannel,
         gnttab: GrantTab,
-        output_sender: Sender<(u32, Vec<u8>)>,
+        output_sender: Sender<(u32, Option<Vec<u8>>)>,
         use_reserved_ref: Option<u64>,
     ) -> Result<ChannelBackend> {
         let processor = KrataChannelBackendProcessor {
@@ -242,11 +244,14 @@ impl ChannelBackend {
         let (input_sender, input_receiver) = channel(SINGLE_CHANNEL_QUEUE_LEN);
-        let task = processor.launch(output_sender, input_receiver).await?;
+        let task = processor
+            .launch(output_sender.clone(), input_receiver)
+            .await?;
         Ok(ChannelBackend {
             domid,
             id,
             task,
+            raw_sender: output_sender,
             sender: input_sender,
         })
     }
@@ -304,7 +309,7 @@ impl KrataChannelBackendProcessor {
     async fn launch(
         &self,
-        output_sender: Sender<(u32, Vec<u8>)>,
+        output_sender: Sender<(u32, Option<Vec<u8>>)>,
         input_receiver: Receiver<Vec<u8>>,
     ) -> Result<JoinHandle<()>> {
         let owned = self.clone();
@@ -321,7 +326,7 @@ impl KrataChannelBackendProcessor {
     async fn processor(
         &self,
-        sender: Sender<(u32, Vec<u8>)>,
+        sender: Sender<(u32, Option<Vec<u8>>)>,
         mut receiver: Receiver<Vec<u8>>,
     ) -> Result<()> {
         self.init().await?;
@@ -396,7 +401,7 @@ impl KrataChannelBackendProcessor {
                 unsafe {
                     let buffer = self.read_output_buffer(channel.local_port, &memory).await?;
                     if !buffer.is_empty() {
-                        sender.send((self.domid, buffer)).await?;
+                        sender.send((self.domid, Some(buffer))).await?;
                     }
                 };
@@ -443,6 +448,10 @@ impl KrataChannelBackendProcessor {
                     error!("channel for domid {} has an invalid input space of {}", self.domid, space);
                 }
                 let free = XenConsoleInterface::INPUT_SIZE.wrapping_sub(space);
+                if free == 0 {
+                    sleep(Duration::from_micros(100)).await;
+                    continue;
+                }
                 let want = data.len().min(free);
                 let buffer = &data[index..want];
                 for b in buffer {
@@ -466,7 +475,7 @@ impl KrataChannelBackendProcessor {
                 unsafe {
                     let buffer = self.read_output_buffer(channel.local_port, &memory).await?;
                     if !buffer.is_empty() {
-                        sender.send((self.domid, buffer)).await?;
+                        sender.send((self.domid, Some(buffer))).await?;
                    }
                };
                channel.unmask_sender.send(channel.local_port).await?;
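The added `free == 0` branch makes the writer back off when the console ring has no space, instead of computing a zero-length write and spinning. A standalone sketch of that decision, with a pure helper modeling one loop iteration (constants and names here are stand-ins, not the crate's API):

```rust
use std::thread;
use std::time::Duration;

// Stand-in for XenConsoleInterface::INPUT_SIZE.
const INPUT_SIZE: usize = 1024;

// Models one iteration of the write loop: returns how many bytes may be
// written now, or None when the ring is full and the caller should
// sleep briefly and retry (the new `if free == 0 { sleep; continue; }`).
fn write_step(pending: usize, used: usize) -> Option<usize> {
    let free = INPUT_SIZE.wrapping_sub(used);
    if free == 0 {
        return None;
    }
    Some(pending.min(free))
}

fn main() {
    // Ring full: back off instead of writing zero bytes forever.
    println!("{:?}", write_step(4, INPUT_SIZE));
    // Ring partially free: write as much as fits.
    println!("{:?}", write_step(2000, INPUT_SIZE - 100));
    // The real loop sleeps ~100us between retries:
    if write_step(4, INPUT_SIZE).is_none() {
        thread::sleep(Duration::from_micros(100));
    }
}
```

Without the branch, a full ring made `want` zero and the loop busy-spun without yielding; the short sleep gives the frontend time to drain the buffer.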


@@ -1,18 +0,0 @@
-use anyhow::Result;
-use tokio::fs::File;
-pub struct XenConsole {
-    pub read_handle: File,
-    pub write_handle: File,
-}
-impl XenConsole {
-    pub async fn new(tty: &str) -> Result<XenConsole> {
-        let read_handle = File::options().read(true).write(false).open(tty).await?;
-        let write_handle = File::options().read(false).write(true).open(tty).await?;
-        Ok(XenConsole {
-            read_handle,
-            write_handle,
-        })
-    }
-}


@@ -8,7 +8,9 @@ use anyhow::{anyhow, Result};
 use ipnetwork::{IpNetwork, Ipv4Network};
 use krata::launchcfg::{
     LaunchInfo, LaunchNetwork, LaunchNetworkIpv4, LaunchNetworkIpv6, LaunchNetworkResolver,
+    LaunchPackedFormat, LaunchRoot,
 };
+use krataoci::packer::OciPackedImage;
 use tokio::sync::Semaphore;
 use uuid::Uuid;
 use xenclient::{DomainChannel, DomainConfig, DomainDisk, DomainNetworkInterface};
@@ -16,23 +18,21 @@ use xenstore::XsdInterface;
 use crate::cfgblk::ConfigBlock;
 use crate::RuntimeContext;
-use krataoci::{
-    cache::ImageCache,
-    compiler::{ImageCompiler, ImageInfo},
-    name::ImageName,
-};
 use super::{GuestInfo, GuestState};
-pub struct GuestLaunchRequest<'a> {
+pub struct GuestLaunchRequest {
+    pub format: LaunchPackedFormat,
+    pub kernel: Vec<u8>,
+    pub initrd: Vec<u8>,
     pub uuid: Option<Uuid>,
-    pub name: Option<&'a str>,
-    pub image: &'a str,
+    pub name: Option<String>,
     pub vcpus: u32,
     pub mem: u64,
     pub env: HashMap<String, String>,
     pub run: Option<Vec<String>>,
     pub debug: bool,
+    pub image: OciPackedImage,
 }
 pub struct GuestLauncher {
@@ -44,15 +44,13 @@ impl GuestLauncher {
         Ok(Self { launch_semaphore })
     }
-    pub async fn launch<'r>(
+    pub async fn launch(
         &mut self,
         context: &RuntimeContext,
-        request: GuestLaunchRequest<'r>,
+        request: GuestLaunchRequest,
     ) -> Result<GuestInfo> {
         let uuid = request.uuid.unwrap_or_else(Uuid::new_v4);
         let xen_name = format!("krata-{uuid}");
-        let image_info = self.compile(request.image, &context.image_cache).await?;
         let mut gateway_mac = MacAddr6::random();
         gateway_mac.set_local(true);
         gateway_mac.set_multicast(false);
@@ -69,9 +67,13 @@ impl GuestLauncher {
         let ipv6_network_mask: u32 = 10;
         let launch_config = LaunchInfo {
+            root: LaunchRoot {
+                format: request.format.clone(),
+            },
             hostname: Some(
                 request
                     .name
+                    .as_ref()
                     .map(|x| x.to_string())
                     .unwrap_or_else(|| format!("krata-{}", uuid)),
             ),
@@ -98,13 +100,14 @@ impl GuestLauncher {
             run: request.run,
         };
-        let cfgblk = ConfigBlock::new(&uuid, &image_info)?;
+        let cfgblk = ConfigBlock::new(&uuid, &request.image)?;
         cfgblk.build(&launch_config)?;
-        let image_squashfs_path = image_info
-            .image_squashfs
+        let image_squashfs_path = request
+            .image
+            .path
             .to_str()
-            .ok_or_else(|| anyhow!("failed to convert image squashfs path to string"))?;
+            .ok_or_else(|| anyhow!("failed to convert image path to string"))?;
         let cfgblk_dir_path = cfgblk
             .dir
@@ -140,7 +143,6 @@ impl GuestLauncher {
                     cfgblk_dir_path,
                 ),
             ),
-            ("krata/image".to_string(), request.image.to_string()),
             (
                 "krata/network/guest/ipv4".to_string(),
                 format!("{}/{}", guest_ipv4, ipv4_network_mask),
@@ -167,28 +169,28 @@ impl GuestLauncher {
             ),
         ];
-        if let Some(name) = request.name {
-            extra_keys.push(("krata/name".to_string(), name.to_string()));
+        if let Some(name) = request.name.as_ref() {
+            extra_keys.push(("krata/name".to_string(), name.clone()));
         }
         let config = DomainConfig {
             backend_domid: 0,
-            name: &xen_name,
+            name: xen_name,
             max_vcpus: request.vcpus,
             mem_mb: request.mem,
-            kernel_path: &context.kernel,
-            initrd_path: &context.initrd,
-            cmdline: &cmdline,
-            use_console_backend: Some("krata-console"),
+            kernel: request.kernel,
+            initrd: request.initrd,
+            cmdline,
+            use_console_backend: Some("krata-console".to_string()),
             disks: vec![
                 DomainDisk {
-                    vdev: "xvda",
-                    block: &image_squashfs_loop,
+                    vdev: "xvda".to_string(),
+                    block: image_squashfs_loop.clone(),
                     writable: false,
                 },
                 DomainDisk {
-                    vdev: "xvdb",
-                    block: &cfgblk_squashfs_loop,
+                    vdev: "xvdb".to_string(),
+                    block: cfgblk_squashfs_loop.clone(),
                     writable: false,
                 },
             ],
@@ -197,7 +199,7 @@ impl GuestLauncher {
                 initialized: false,
             }],
             vifs: vec![DomainNetworkInterface {
-                mac: &guest_mac_string,
+                mac: guest_mac_string.clone(),
                 mtu: 1500,
                 bridge: None,
                 script: None,
@@ -209,10 +211,10 @@ impl GuestLauncher {
         };
         match context.xen.create(&config).await {
             Ok(created) => Ok(GuestInfo {
-                name: request.name.map(|x| x.to_string()),
+                name: request.name.as_ref().map(|x| x.to_string()),
                 uuid,
                 domid: created.domid,
-                image: request.image.to_string(),
+                image: request.image.digest,
                 loops: vec![],
                 guest_ipv4: Some(IpNetwork::new(
                     IpAddr::V4(guest_ipv4),
@@ -243,12 +245,6 @@ impl GuestLauncher {
         }
     }
-    async fn compile(&self, image: &str, image_cache: &ImageCache) -> Result<ImageInfo> {
-        let image = ImageName::parse(image)?;
-        let compiler = ImageCompiler::new(image_cache, None)?;
-        compiler.compile(&image).await
-    }
     async fn allocate_ipv4(&self, context: &RuntimeContext) -> Result<Ipv4Addr> {
         let network = Ipv4Network::new(Ipv4Addr::new(10, 75, 80, 0), 24)?;
         let mut used: Vec<Ipv4Addr> = vec![];


@@ -1,9 +1,4 @@
-use std::{
-    fs,
-    path::{Path, PathBuf},
-    str::FromStr,
-    sync::Arc,
-};
+use std::{fs, path::PathBuf, str::FromStr, sync::Arc};
 use anyhow::{anyhow, Result};
 use ipnetwork::IpNetwork;
@@ -15,15 +10,12 @@ use xenstore::{XsdClient, XsdInterface};
 use self::{
     autoloop::AutoLoop,
-    console::XenConsole,
     launch::{GuestLaunchRequest, GuestLauncher},
 };
-use krataoci::cache::ImageCache;
 pub mod autoloop;
 pub mod cfgblk;
 pub mod channel;
-pub mod console;
 pub mod launch;
 pub struct GuestLoopInfo {
@@ -53,48 +45,19 @@ pub struct GuestInfo {
 #[derive(Clone)]
 pub struct RuntimeContext {
-    pub image_cache: ImageCache,
     pub autoloop: AutoLoop,
     pub xen: XenClient,
-    pub kernel: String,
-    pub initrd: String,
 }
 impl RuntimeContext {
-    pub async fn new(store: String) -> Result<Self> {
-        let mut image_cache_path = PathBuf::from(&store);
-        image_cache_path.push("cache");
-        fs::create_dir_all(&image_cache_path)?;
+    pub async fn new() -> Result<Self> {
         let xen = XenClient::open(0).await?;
-        image_cache_path.push("image");
-        fs::create_dir_all(&image_cache_path)?;
-        let image_cache = ImageCache::new(&image_cache_path)?;
-        let kernel = RuntimeContext::detect_guest_file(&store, "kernel")?;
-        let initrd = RuntimeContext::detect_guest_file(&store, "initrd")?;
         Ok(RuntimeContext {
-            image_cache,
             autoloop: AutoLoop::new(LoopControl::open()?),
             xen,
-            kernel,
-            initrd,
         })
     }
-    fn detect_guest_file(store: &str, name: &str) -> Result<String> {
-        let mut path = PathBuf::from(format!("{}/guest/{}", store, name));
-        if path.is_file() {
-            return path_as_string(&path);
-        }
-        path = PathBuf::from(format!("/usr/share/krata/guest/{}", name));
-        if path.is_file() {
-            return path_as_string(&path);
-        }
-        Err(anyhow!("unable to find required guest file: {}", name))
-    }
     pub async fn list(&self) -> Result<Vec<GuestInfo>> {
         let mut guests: Vec<GuestInfo> = Vec::new();
         for domid_candidate in self.xen.store.list("/local/domain").await? {
@@ -254,22 +217,20 @@ impl RuntimeContext {
 #[derive(Clone)]
 pub struct Runtime {
-    store: Arc<String>,
     context: RuntimeContext,
     launch_semaphore: Arc<Semaphore>,
 }
 impl Runtime {
-    pub async fn new(store: String) -> Result<Self> {
-        let context = RuntimeContext::new(store.clone()).await?;
+    pub async fn new() -> Result<Self> {
+        let context = RuntimeContext::new().await?;
         Ok(Self {
-            store: Arc::new(store),
             context,
             launch_semaphore: Arc::new(Semaphore::new(1)),
         })
     }
-    pub async fn launch<'a>(&self, request: GuestLaunchRequest<'a>) -> Result<GuestInfo> {
+    pub async fn launch(&self, request: GuestLaunchRequest) -> Result<GuestInfo> {
         let mut launcher = GuestLauncher::new(self.launch_semaphore.clone())?;
         launcher.launch(&self.context, request).await
     }
@@ -321,28 +282,11 @@ impl Runtime {
         Ok(uuid)
     }
-    pub async fn console(&self, uuid: Uuid) -> Result<XenConsole> {
-        let info = self
-            .context
-            .resolve(uuid)
-            .await?
-            .ok_or_else(|| anyhow!("unable to resolve guest: {}", uuid))?;
-        let domid = info.domid;
-        let tty = self.context.xen.get_console_path(domid).await?;
-        XenConsole::new(&tty).await
-    }
     pub async fn list(&self) -> Result<Vec<GuestInfo>> {
         self.context.list().await
     }
     pub async fn dupe(&self) -> Result<Runtime> {
-        Runtime::new((*self.store).clone()).await
+        Runtime::new().await
    }
 }
-fn path_as_string(path: &Path) -> Result<String> {
-    path.to_str()
.ok_or_else(|| anyhow!("unable to convert path to string"))
.map(|x| x.to_string())
}


@@ -14,8 +14,8 @@ elf = { workspace = true }
 flate2 = { workspace = true }
 libc = { workspace = true }
 log = { workspace = true }
-krata-xencall = { path = "../xencall", version = "^0.0.7" }
-krata-xenstore = { path = "../xenstore", version = "^0.0.7" }
+krata-xencall = { path = "../xencall", version = "^0.0.10" }
+krata-xenstore = { path = "../xenstore", version = "^0.0.10" }
 memchr = { workspace = true }
 nix = { workspace = true }
 slice-copy = { workspace = true }


@@ -1,4 +1,5 @@
 use std::{env, process};
+use tokio::fs;
 use xenclient::error::Result;
 use xenclient::{DomainConfig, XenClient};
@@ -16,12 +17,12 @@ async fn main() -> Result<()> {
     let client = XenClient::open(0).await?;
     let config = DomainConfig {
         backend_domid: 0,
-        name: "xenclient-test",
+        name: "xenclient-test".to_string(),
         max_vcpus: 1,
         mem_mb: 512,
-        kernel_path: kernel_image_path.as_str(),
-        initrd_path: initrd_path.as_str(),
-        cmdline: "debug elevator=noop",
+        kernel: fs::read(&kernel_image_path).await?,
+        initrd: fs::read(&initrd_path).await?,
+        cmdline: "debug elevator=noop".to_string(),
         use_console_backend: None,
         disks: vec![],
         channels: vec![],


@@ -107,17 +107,15 @@ impl ElfImageLoader {
         ElfImageLoader::load_xz(file.as_slice())
     }
 
-    pub fn load_file_kernel(path: &str) -> Result<ElfImageLoader> {
-        let file = std::fs::read(path)?;
-
-        for start in find_iter(file.as_slice(), &[0x1f, 0x8b]) {
-            if let Ok(elf) = ElfImageLoader::load_gz(&file[start..]) {
+    pub fn load_file_kernel(data: &[u8]) -> Result<ElfImageLoader> {
+        for start in find_iter(data, &[0x1f, 0x8b]) {
+            if let Ok(elf) = ElfImageLoader::load_gz(&data[start..]) {
                 return Ok(elf);
             }
         }
 
-        for start in find_iter(file.as_slice(), &[0xfd, 0x37, 0x7a, 0x58]) {
-            if let Ok(elf) = ElfImageLoader::load_xz(&file[start..]) {
+        for start in find_iter(data, &[0xfd, 0x37, 0x7a, 0x58]) {
+            if let Ok(elf) = ElfImageLoader::load_xz(&data[start..]) {
                 return Ok(elf);
             }
         }
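The hunk above changes `load_file_kernel` to take the kernel image as in-memory bytes rather than a path, while keeping the same strategy: scan for gzip (`1f 8b`) or xz (`fd 37 7a 58`) magic numbers and attempt decompression at each candidate offset. A minimal standalone sketch of that scanning step (`find_magic` here is a hypothetical stand-in for the `memchr`-based `find_iter` the real code uses):

```rust
// Scan a byte buffer for every offset where `needle` occurs; a bzImage
// embeds the real vmlinux ELF as a compressed payload, so each match is
// a candidate offset to try decompressing from.
fn find_magic(data: &[u8], needle: &[u8]) -> Vec<usize> {
    if needle.is_empty() || data.len() < needle.len() {
        return Vec::new();
    }
    (0..=data.len() - needle.len())
        .filter(|&i| &data[i..i + needle.len()] == needle)
        .collect()
}

fn main() {
    // A fake "kernel image": junk bytes with two gzip magic sequences.
    let image = [0x00u8, 0x11, 0x1f, 0x8b, 0x08, 0x00, 0x1f, 0x8b];
    let gzip_offsets = find_magic(&image, &[0x1f, 0x8b]);
    println!("{:?}", gzip_offsets); // prints "[2, 6]"
}
```

The first offset whose payload decompresses into a valid ELF wins, which is why the loop returns on the first `Ok` rather than collecting all matches.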


@@ -23,7 +23,6 @@ use boot::BootState;
 use log::{debug, trace, warn};
 use tokio::time::timeout;
-use std::fs::read;
 use std::path::PathBuf;
 use std::str::FromStr;
 use std::time::Duration;
@@ -40,60 +39,60 @@ pub struct XenClient {
     call: XenCall,
 }
 
-#[derive(Debug)]
+#[derive(Clone, Debug)]
 pub struct BlockDeviceRef {
     pub path: String,
     pub major: u32,
     pub minor: u32,
 }
 
-#[derive(Debug)]
-pub struct DomainDisk<'a> {
-    pub vdev: &'a str,
-    pub block: &'a BlockDeviceRef,
+#[derive(Clone, Debug)]
+pub struct DomainDisk {
+    pub vdev: String,
+    pub block: BlockDeviceRef,
     pub writable: bool,
 }
 
-#[derive(Debug)]
-pub struct DomainFilesystem<'a> {
-    pub path: &'a str,
-    pub tag: &'a str,
+#[derive(Clone, Debug)]
+pub struct DomainFilesystem {
+    pub path: String,
+    pub tag: String,
 }
 
-#[derive(Debug)]
-pub struct DomainNetworkInterface<'a> {
-    pub mac: &'a str,
+#[derive(Clone, Debug)]
+pub struct DomainNetworkInterface {
+    pub mac: String,
     pub mtu: u32,
-    pub bridge: Option<&'a str>,
-    pub script: Option<&'a str>,
+    pub bridge: Option<String>,
+    pub script: Option<String>,
 }
 
-#[derive(Debug)]
+#[derive(Clone, Debug)]
 pub struct DomainChannel {
     pub typ: String,
     pub initialized: bool,
 }
 
-#[derive(Debug)]
-pub struct DomainEventChannel<'a> {
-    pub name: &'a str,
+#[derive(Clone, Debug)]
+pub struct DomainEventChannel {
+    pub name: String,
 }
 
-#[derive(Debug)]
-pub struct DomainConfig<'a> {
+#[derive(Clone, Debug)]
+pub struct DomainConfig {
     pub backend_domid: u32,
-    pub name: &'a str,
+    pub name: String,
     pub max_vcpus: u32,
     pub mem_mb: u64,
-    pub kernel_path: &'a str,
-    pub initrd_path: &'a str,
-    pub cmdline: &'a str,
-    pub disks: Vec<DomainDisk<'a>>,
-    pub use_console_backend: Option<&'a str>,
+    pub kernel: Vec<u8>,
+    pub initrd: Vec<u8>,
+    pub cmdline: String,
+    pub disks: Vec<DomainDisk>,
+    pub use_console_backend: Option<String>,
     pub channels: Vec<DomainChannel>,
-    pub vifs: Vec<DomainNetworkInterface<'a>>,
-    pub filesystems: Vec<DomainFilesystem<'a>>,
-    pub event_channels: Vec<DomainEventChannel<'a>>,
+    pub vifs: Vec<DomainNetworkInterface>,
+    pub filesystems: Vec<DomainFilesystem>,
+    pub event_channels: Vec<DomainEventChannel>,
     pub extra_keys: Vec<(String, String)>,
     pub extra_rw_paths: Vec<String>,
 }
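The struct changes above swap borrowed fields (`&'a str`, `&'a BlockDeviceRef`) for owned ones (`String`, `Vec<u8>`), dropping the `<'a>` lifetime parameter and making `#[derive(Clone, Debug)]` possible. A hypothetical miniature (not the real xenclient types) of why that matters: an owned config can be cloned and moved across thread or task boundaries independently of any backing storage.

```rust
// Hypothetical reduction of the change above: owned fields mean no
// lifetime parameter, so the config is 'static and freely clonable.
#[derive(Clone, Debug)]
struct MiniDomainConfig {
    name: String,
    kernel: Vec<u8>, // raw image bytes instead of a path on disk
}

fn main() {
    let config = MiniDomainConfig {
        name: "xenclient-test".to_string(),
        kernel: vec![0x7f, b'E', b'L', b'F'],
    };
    // A clone can outlive the original and cross a thread boundary,
    // which a borrowed &'a str field would forbid.
    let copy = config.clone();
    let handle = std::thread::spawn(move || copy.name.len());
    assert_eq!(handle.join().unwrap(), 14);
    println!("{}", config.name);
}
```

The trade-off is an extra copy when constructing the config, which is negligible next to the kernel bytes already being read into memory.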
@@ -117,7 +116,7 @@ impl XenClient {
         Ok(XenClient { store, call })
     }
 
-    pub async fn create(&self, config: &DomainConfig<'_>) -> Result<CreatedDomain> {
+    pub async fn create(&self, config: &DomainConfig) -> Result<CreatedDomain> {
         let mut domain = CreateDomain {
             max_vcpus: config.max_vcpus,
             ..Default::default()
@@ -143,7 +142,7 @@ impl XenClient {
         &self,
         domid: u32,
         domain: &CreateDomain,
-        config: &DomainConfig<'_>,
+        config: &DomainConfig,
     ) -> Result<CreatedDomain> {
         trace!(
             "XenClient init domid={} domain={:?} config={:?}",
@@ -237,9 +236,9 @@ impl XenClient {
             &Uuid::from_bytes(domain.handle).to_string(),
         )
         .await?;
-        tx.write_string(format!("{}/name", dom_path).as_str(), config.name)
+        tx.write_string(format!("{}/name", dom_path).as_str(), &config.name)
             .await?;
-        tx.write_string(format!("{}/name", vm_path).as_str(), config.name)
+        tx.write_string(format!("{}/name", vm_path).as_str(), &config.name)
             .await?;
 
         for (key, value) in &config.extra_keys {
@@ -257,7 +256,7 @@ impl XenClient {
         self.call.set_max_vcpus(domid, config.max_vcpus).await?;
         self.call.set_max_mem(domid, config.mem_mb * 1024).await?;
-        let image_loader = ElfImageLoader::load_file_kernel(config.kernel_path)?;
+        let image_loader = ElfImageLoader::load_file_kernel(&config.kernel)?;
 
         let xenstore_evtchn: u32;
         let xenstore_mfn: u64;
@@ -270,18 +269,17 @@ impl XenClient {
         let mut arch = Box::new(X86BootSetup::new()) as Box<dyn ArchBootSetup + Send + Sync>;
         #[cfg(target_arch = "aarch64")]
         let mut arch = Box::new(Arm64BootSetup::new()) as Box<dyn ArchBootSetup + Send + Sync>;
-        let initrd = read(config.initrd_path)?;
         state = boot
             .initialize(
                 &mut arch,
                 &image_loader,
-                initrd.as_slice(),
+                &config.initrd,
                 config.max_vcpus,
                 config.mem_mb,
                 1,
             )
             .await?;
-        boot.boot(&mut arch, &mut state, config.cmdline).await?;
+        boot.boot(&mut arch, &mut state, &config.cmdline).await?;
         xenstore_evtchn = state.store_evtchn;
         xenstore_mfn = boot.phys.p2m[state.xenstore_segment.pfn as usize];
         p2m = boot.phys.p2m;
@@ -291,19 +289,9 @@ impl XenClient {
         let tx = self.store.transaction().await?;
         tx.write_string(format!("{}/image/os_type", vm_path).as_str(), "linux")
             .await?;
-        tx.write_string(
-            format!("{}/image/kernel", vm_path).as_str(),
-            config.kernel_path,
-        )
-        .await?;
-        tx.write_string(
-            format!("{}/image/ramdisk", vm_path).as_str(),
-            config.initrd_path,
-        )
-        .await?;
         tx.write_string(
             format!("{}/image/cmdline", vm_path).as_str(),
-            config.cmdline,
+            &config.cmdline,
         )
         .await?;
@@ -352,7 +340,8 @@ impl XenClient {
             &DomainChannel {
                 typ: config
                     .use_console_backend
-                    .unwrap_or("xenconsoled")
+                    .clone()
+                    .unwrap_or("xenconsoled".to_string())
                     .to_string(),
                 initialized: true,
             },
@@ -429,7 +418,7 @@ impl XenClient {
             .await?;
         let channel_path = format!("{}/evtchn/{}", dom_path, channel.name);
         self.store
-            .write_string(&format!("{}/name", channel_path), channel.name)
+            .write_string(&format!("{}/name", channel_path), &channel.name)
             .await?;
         self.store
             .write_string(&format!("{}/channel", channel_path), &id.to_string())
@@ -447,7 +436,7 @@ impl XenClient {
         backend_domid: u32,
         domid: u32,
         index: usize,
-        disk: &DomainDisk<'_>,
+        disk: &DomainDisk,
     ) -> Result<()> {
         let id = (202 << 8) | (index << 4) as u64;
         let backend_items: Vec<(&str, String)> = vec![
@@ -567,7 +556,7 @@ impl XenClient {
         backend_domid: u32,
         domid: u32,
         index: usize,
-        filesystem: &DomainFilesystem<'_>,
+        filesystem: &DomainFilesystem,
     ) -> Result<()> {
         let id = 90 + index as u64;
         let backend_items: Vec<(&str, String)> = vec![
@@ -605,7 +594,7 @@ impl XenClient {
         backend_domid: u32,
         domid: u32,
         index: usize,
-        vif: &DomainNetworkInterface<'_>,
+        vif: &DomainNetworkInterface,
     ) -> Result<()> {
         let id = 20 + index as u64;
         let mut backend_items: Vec<(&str, String)> = vec![
@@ -619,12 +608,12 @@ impl XenClient {
         ];
 
         if vif.bridge.is_some() {
-            backend_items.extend_from_slice(&[("bridge", vif.bridge.unwrap().to_string())]);
+            backend_items.extend_from_slice(&[("bridge", vif.bridge.clone().unwrap())]);
         }
         if vif.script.is_some() {
             backend_items.extend_from_slice(&[
-                ("script", vif.script.unwrap().to_string()),
+                ("script", vif.script.clone().unwrap()),
                 ("hotplug-status", "".to_string()),
             ]);
         } else {


@@ -5,11 +5,6 @@ TOOLS_DIR="$(dirname "${0}")"
 RUST_TARGET="$("${TOOLS_DIR}/target.sh")"
 CROSS_COMPILE="$("${TOOLS_DIR}/cross-compile.sh")"
 
-if [ "${TARGET_LIBC}" = "musl" ] && [ -f "/etc/alpine-release" ]
-then
-  export RUSTFLAGS="-Ctarget-feature=-crt-static"
-fi
-
 if [ -z "${CARGO}" ]
 then
   if [ "${CROSS_COMPILE}" = "1" ] && command -v cross > /dev/null


@@ -1,6 +1,17 @@
 #!/bin/sh
 set -e
 
+if [ -z "${TARGET_LIBC}" ] && [ -e "/etc/alpine-release" ] && [ "${KRATA_TARGET_IGNORE_LIBC}" != "1" ]
+then
+  TARGET_LIBC="musl"
+  TARGET_VENDOR="alpine"
+fi
+
+if [ -z "${TARGET_VENDOR}" ]
+then
+  TARGET_VENDOR="unknown"
+fi
+
 if [ -z "${TARGET_LIBC}" ] || [ "${KRATA_TARGET_IGNORE_LIBC}" = "1" ]
 then
   TARGET_LIBC="gnu"
@@ -46,20 +57,20 @@ elif [ "${TARGET_OS}" = "freebsd" ]
 then
   if [ -z "${RUST_TARGET}" ]
   then
-    [ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-unknown-freebsd"
+    [ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-${TARGET_VENDOR}-freebsd"
   fi
 elif [ "${TARGET_OS}" = "netbsd" ]
 then
   if [ -z "${RUST_TARGET}" ]
   then
-    [ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-unknown-netbsd"
+    [ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-${TARGET_VENDOR}-netbsd"
   fi
 else
   if [ -z "${RUST_TARGET}" ]
   then
-    [ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-unknown-linux-${TARGET_LIBC}"
-    [ "${TARGET_ARCH}" = "aarch64" ] && RUST_TARGET="aarch64-unknown-linux-${TARGET_LIBC}"
-    [ "${TARGET_ARCH}" = "riscv64gc" ] && RUST_TARGET="riscv64gc-unknown-linux-${TARGET_LIBC}"
+    [ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-${TARGET_VENDOR}-linux-${TARGET_LIBC}"
+    [ "${TARGET_ARCH}" = "aarch64" ] && RUST_TARGET="aarch64-${TARGET_VENDOR}-linux-${TARGET_LIBC}"
+    [ "${TARGET_ARCH}" = "riscv64gc" ] && RUST_TARGET="riscv64gc-${TARGET_VENDOR}-linux-${TARGET_LIBC}"
   fi
 fi
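The target.sh change above parameterizes the vendor slot of the Rust target triple, defaulting to `unknown` but switching to `alpine` when building on Alpine with musl. A hypothetical standalone sketch of that triple-assembly logic (`compute_triple` is illustrative, not a function in the repo):

```shell
#!/bin/sh
# Assemble an <arch>-<vendor>-linux-<libc> triple, defaulting the
# vendor to "unknown" when none is detected (as target.sh now does).
compute_triple() {
  arch="$1" libc="$2" vendor="$3"
  [ -z "${vendor}" ] && vendor="unknown"
  echo "${arch}-${vendor}-linux-${libc}"
}

compute_triple x86_64 musl alpine   # prints x86_64-alpine-linux-musl
compute_triple aarch64 gnu ""       # prints aarch64-unknown-linux-gnu
```

Using the vendor-qualified `x86_64-alpine-linux-musl` triple matters on Alpine because its Rust toolchain ships under that name rather than `x86_64-unknown-linux-musl`.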


@@ -43,7 +43,7 @@ do
     asset "${SOURCE_FILE_PATH}" "target/assets/krata_${TAG_NAME}_${PLATFORM}.deb"
   elif [ "${FORM}" = "alpine" ]
   then
-    asset "${SOURCE_FILE_PATH}" "target/assets/krata_${TAG_NAME}_${PLATFORM}.deb"
+    asset "${SOURCE_FILE_PATH}" "target/assets/krata_${TAG_NAME}_${PLATFORM}.apk"
   elif [ "${FORM}" = "bundle-systemd" ]
   then
     asset "${SOURCE_FILE_PATH}" "target/assets/krata-systemd_${TAG_NAME}_${PLATFORM}.tgz"


@@ -28,5 +28,5 @@ build_and_run() {
   fi
   RUST_TARGET="$(./hack/build/target.sh)"
   ./hack/build/cargo.sh build ${CARGO_BUILD_FLAGS} --bin "${EXE_TARGET}"
-  exec sudo RUST_LOG="${RUST_LOG}" "target/${RUST_TARGET}/debug/${EXE_TARGET}" "${@}"
+  exec sudo sh -c "RUST_LOG='${RUST_LOG}' 'target/${RUST_TARGET}/debug/${EXE_TARGET}' $*"
 }

hack/dist/apk.sh

@@ -19,6 +19,8 @@ fpm -s tar -t apk \
   --license agpl3 \
   --version "${KRATA_VERSION}" \
   --architecture "${TARGET_ARCH}" \
+  --depends "squashfs-tools" \
+  --depends "erofs-utils" \
   --description "Krata Hypervisor" \
   --url "https://krata.dev" \
   --maintainer "Edera Team <contact@edera.dev>" \

hack/dist/deb.sh

@@ -20,6 +20,8 @@ fpm -s tar -t deb \
   --version "${KRATA_VERSION}" \
   --architecture "${TARGET_ARCH_DEBIAN}" \
   --depends "xen-system-${TARGET_ARCH_DEBIAN}" \
+  --depends "squashfs-tools" \
+  --depends "erofs-utils" \
   --description "Krata Hypervisor" \
   --url "https://krata.dev" \
   --maintainer "Edera Team <contact@edera.dev>" \


@@ -26,7 +26,7 @@ KERNEL_SRC="${KERNEL_DIR}/linux-${KERNEL_VERSION}-${TARGET_ARCH_STANDARD}"
 if [ -z "${KRATA_KERNEL_BUILD_JOBS}" ]
 then
-  KRATA_KERNEL_BUILD_JOBS="2"
+  KRATA_KERNEL_BUILD_JOBS="$(nproc)"
 fi
 
 if [ ! -f "${KERNEL_SRC}/Makefile" ]
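The hunk above switches the default kernel build parallelism from a fixed `2` to the machine's CPU count. A hypothetical sketch of the same defaulting pattern, with a fallback to the old value of 2 for systems where `nproc` is unavailable (the fallback is an illustration, not part of the script):

```shell
#!/bin/sh
# Prefer the caller's KRATA_KERNEL_BUILD_JOBS, then nproc, then 2.
jobs="${KRATA_KERNEL_BUILD_JOBS:-$(nproc 2>/dev/null || echo 2)}"
echo "building with -j${jobs}"
```

The `${VAR:-default}` expansion keeps the environment override working exactly as before; only the default changes.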


@ -1,6 +1,6 @@
# #
# Automatically generated file; DO NOT EDIT. # Automatically generated file; DO NOT EDIT.
# Linux/x86_64 6.8.2 Kernel Configuration # Linux/x86 6.7.2 Kernel Configuration
# #
CONFIG_CC_VERSION_TEXT="gcc (Debian 13.2.0-23) 13.2.0" CONFIG_CC_VERSION_TEXT="gcc (Debian 13.2.0-23) 13.2.0"
CONFIG_CC_IS_GCC=y CONFIG_CC_IS_GCC=y
@ -15,7 +15,6 @@ CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
CONFIG_GCC_ASM_GOTO_OUTPUT_WORKAROUND=y
CONFIG_TOOLS_SUPPORT_RELR=y CONFIG_TOOLS_SUPPORT_RELR=y
CONFIG_CC_HAS_ASM_INLINE=y CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
@ -188,10 +187,8 @@ CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y CONFIG_CC_HAS_INT128=y
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5" CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_GCC10_NO_ARRAY_BOUNDS=y CONFIG_GCC11_NO_ARRAY_BOUNDS=y
CONFIG_CC_NO_ARRAY_BOUNDS=y CONFIG_CC_NO_ARRAY_BOUNDS=y
CONFIG_GCC_NO_STRINGOP_OVERFLOW=y
CONFIG_CC_NO_STRINGOP_OVERFLOW=y
CONFIG_ARCH_SUPPORTS_INT128=y CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_CGROUPS=y CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y CONFIG_PAGE_COUNTER=y
@ -270,19 +267,19 @@ CONFIG_AIO=y
CONFIG_IO_URING=y CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y CONFIG_ADVISE_SYSCALLS=y
CONFIG_MEMBARRIER=y CONFIG_MEMBARRIER=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_PC104 is not set
CONFIG_KALLSYMS=y CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_SELFTEST is not set # CONFIG_KALLSYMS_SELFTEST is not set
CONFIG_KALLSYMS_ALL=y CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_HAVE_PERF_EVENTS=y CONFIG_HAVE_PERF_EVENTS=y
CONFIG_GUEST_PERF_EVENTS=y CONFIG_GUEST_PERF_EVENTS=y
# CONFIG_PC104 is not set
# #
# Kernel Performance Events And Counters # Kernel Performance Events And Counters
@ -382,7 +379,6 @@ CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6 CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6 CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y CONFIG_X86_TSC=y
CONFIG_X86_HAVE_PAE=y
CONFIG_X86_CMPXCHG64=y CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64 CONFIG_X86_MINIMUM_CPU_FAMILY=64
@ -460,6 +456,7 @@ CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set # CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
# CONFIG_X86_SGX is not set # CONFIG_X86_SGX is not set
# CONFIG_X86_USER_SHADOW_STACK is not set # CONFIG_X86_USER_SHADOW_STACK is not set
# CONFIG_INTEL_TDX_HOST is not set
CONFIG_EFI=y CONFIG_EFI=y
CONFIG_EFI_STUB=y CONFIG_EFI_STUB=y
CONFIG_EFI_HANDOVER_PROTOCOL=y CONFIG_EFI_HANDOVER_PROTOCOL=y
@ -522,7 +519,6 @@ CONFIG_CPU_IBRS_ENTRY=y
CONFIG_CPU_SRSO=y CONFIG_CPU_SRSO=y
# CONFIG_SLS is not set # CONFIG_SLS is not set
# CONFIG_GDS_FORCE_MITIGATION is not set # CONFIG_GDS_FORCE_MITIGATION is not set
CONFIG_MITIGATION_RFDS=y
CONFIG_ARCH_HAS_ADD_PAGES=y CONFIG_ARCH_HAS_ADD_PAGES=y
# #
@ -547,7 +543,6 @@ CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
CONFIG_ACPI_THERMAL_LIB=y
# CONFIG_ACPI_DEBUGGER is not set # CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y CONFIG_ACPI_SPCR_TABLE=y
# CONFIG_ACPI_FPDT is not set # CONFIG_ACPI_FPDT is not set
@ -674,13 +669,14 @@ CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
# end of Binary Emulations # end of Binary Emulations
CONFIG_HAVE_KVM=y CONFIG_HAVE_KVM=y
CONFIG_KVM_COMMON=y
CONFIG_HAVE_KVM_PFNCACHE=y CONFIG_HAVE_KVM_PFNCACHE=y
CONFIG_HAVE_KVM_IRQCHIP=y CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_DIRTY_RING=y CONFIG_HAVE_KVM_DIRTY_RING=y
CONFIG_HAVE_KVM_DIRTY_RING_TSO=y CONFIG_HAVE_KVM_DIRTY_RING_TSO=y
CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y CONFIG_HAVE_KVM_MSI=y
@ -693,16 +689,13 @@ CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_HAVE_KVM_PM_NOTIFIER=y CONFIG_HAVE_KVM_PM_NOTIFIER=y
CONFIG_KVM_GENERIC_HARDWARE_ENABLING=y CONFIG_KVM_GENERIC_HARDWARE_ENABLING=y
CONFIG_KVM_GENERIC_MMU_NOTIFIER=y
CONFIG_VIRTUALIZATION=y CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m CONFIG_KVM=m
CONFIG_KVM_WERROR=y CONFIG_KVM_WERROR=y
# CONFIG_KVM_SW_PROTECTED_VM is not set
CONFIG_KVM_INTEL=m CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m CONFIG_KVM_AMD=m
CONFIG_KVM_AMD_SEV=y CONFIG_KVM_AMD_SEV=y
CONFIG_KVM_SMM=y CONFIG_KVM_SMM=y
CONFIG_KVM_HYPERV=y
# CONFIG_KVM_XEN is not set # CONFIG_KVM_XEN is not set
# CONFIG_KVM_PROVE_MMU is not set # CONFIG_KVM_PROVE_MMU is not set
CONFIG_KVM_MAX_NR_VCPUS=1024 CONFIG_KVM_MAX_NR_VCPUS=1024
@ -853,7 +846,6 @@ CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y
CONFIG_ARCH_HAS_ELFCORE_COMPAT=y CONFIG_ARCH_HAS_ELFCORE_COMPAT=y
CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y
CONFIG_DYNAMIC_SIGFRAME=y CONFIG_DYNAMIC_SIGFRAME=y
CONFIG_ARCH_HAS_HW_PTE_YOUNG=y
CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y
# #
@ -910,7 +902,6 @@ CONFIG_BLK_ICQ=y
CONFIG_BLK_DEV_BSGLIB=y CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=m CONFIG_BLK_DEV_INTEGRITY_T10=m
CONFIG_BLK_DEV_WRITE_MOUNTED=y
# CONFIG_BLK_DEV_ZONED is not set # CONFIG_BLK_DEV_ZONED is not set
CONFIG_BLK_DEV_THROTTLING=y CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set # CONFIG_BLK_DEV_THROTTLING_LOW is not set
@ -1000,7 +991,6 @@ CONFIG_SWAP=y
CONFIG_ZSWAP=y CONFIG_ZSWAP=y
# CONFIG_ZSWAP_DEFAULT_ON is not set # CONFIG_ZSWAP_DEFAULT_ON is not set
# CONFIG_ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON is not set # CONFIG_ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON is not set
# CONFIG_ZSWAP_SHRINKER_DEFAULT_ON is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set # CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set # CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
@ -1019,8 +1009,9 @@ CONFIG_ZSMALLOC=y
CONFIG_ZSMALLOC_CHAIN_SIZE=8 CONFIG_ZSMALLOC_CHAIN_SIZE=8
# #
# Slab allocator options # SLAB allocator options
# #
# CONFIG_SLAB_DEPRECATED is not set
CONFIG_SLUB=y CONFIG_SLUB=y
# CONFIG_SLUB_TINY is not set # CONFIG_SLUB_TINY is not set
CONFIG_SLAB_MERGE_DEFAULT=y CONFIG_SLAB_MERGE_DEFAULT=y
@ -1029,7 +1020,7 @@ CONFIG_SLAB_FREELIST_RANDOM=y
# CONFIG_SLUB_STATS is not set # CONFIG_SLUB_STATS is not set
CONFIG_SLUB_CPU_PARTIAL=y CONFIG_SLUB_CPU_PARTIAL=y
# CONFIG_RANDOM_KMALLOC_CACHES is not set # CONFIG_RANDOM_KMALLOC_CACHES is not set
# end of Slab allocator options # end of SLAB allocator options
# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set # CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
# CONFIG_COMPAT_BRK is not set # CONFIG_COMPAT_BRK is not set
@ -1071,7 +1062,6 @@ CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set # CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
# CONFIG_TRANSPARENT_HUGEPAGE_NEVER is not set
CONFIG_THP_SWAP=y CONFIG_THP_SWAP=y
# CONFIG_READ_ONLY_THP_FOR_FS is not set # CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
@ -1102,7 +1092,6 @@ CONFIG_SECRETMEM=y
CONFIG_LRU_GEN=y CONFIG_LRU_GEN=y
CONFIG_LRU_GEN_ENABLED=y CONFIG_LRU_GEN_ENABLED=y
# CONFIG_LRU_GEN_STATS is not set # CONFIG_LRU_GEN_STATS is not set
CONFIG_LRU_GEN_WALKS_MMU=y
CONFIG_ARCH_SUPPORTS_PER_VMA_LOCK=y CONFIG_ARCH_SUPPORTS_PER_VMA_LOCK=y
CONFIG_PER_VMA_LOCK=y CONFIG_PER_VMA_LOCK=y
CONFIG_LOCK_MM_AND_FIND_VMA=y CONFIG_LOCK_MM_AND_FIND_VMA=y
@ -1600,6 +1589,7 @@ CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
CONFIG_IP_DCCP=m CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m CONFIG_INET_DCCP_DIAG=m
@ -1935,7 +1925,6 @@ CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set # CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set # CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_SYS_HYPERVISOR=y CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_DEVICES=y
CONFIG_GENERIC_CPU_AUTOPROBE=y CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y CONFIG_REGMAP=y
@ -2040,7 +2029,6 @@ CONFIG_ZRAM_DEF_COMP_LZ4=y
# CONFIG_ZRAM_DEF_COMP_LZ4HC is not set # CONFIG_ZRAM_DEF_COMP_LZ4HC is not set
CONFIG_ZRAM_DEF_COMP="lz4" CONFIG_ZRAM_DEF_COMP="lz4"
# CONFIG_ZRAM_WRITEBACK is not set # CONFIG_ZRAM_WRITEBACK is not set
# CONFIG_ZRAM_TRACK_ENTRY_ACTIME is not set
# CONFIG_ZRAM_MEMORY_TRACKING is not set # CONFIG_ZRAM_MEMORY_TRACKING is not set
# CONFIG_ZRAM_MULTI_COMP is not set # CONFIG_ZRAM_MULTI_COMP is not set
CONFIG_BLK_DEV_LOOP=m CONFIG_BLK_DEV_LOOP=m
@ -2101,7 +2089,6 @@ CONFIG_VMWARE_BALLOON=m
# CONFIG_DW_XDATA_PCIE is not set # CONFIG_DW_XDATA_PCIE is not set
# CONFIG_PCI_ENDPOINT_TEST is not set # CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set # CONFIG_XILINX_SDFEC is not set
# CONFIG_NSM is not set
# CONFIG_C2PORT is not set # CONFIG_C2PORT is not set
# #
@ -2128,6 +2115,8 @@ CONFIG_VMWARE_BALLOON=m
# #
# CONFIG_ALTERA_STAPL is not set # CONFIG_ALTERA_STAPL is not set
# CONFIG_INTEL_MEI is not set # CONFIG_INTEL_MEI is not set
# CONFIG_INTEL_MEI_ME is not set
# CONFIG_INTEL_MEI_TXE is not set
CONFIG_VMWARE_VMCI=m CONFIG_VMWARE_VMCI=m
# CONFIG_GENWQE is not set # CONFIG_GENWQE is not set
# CONFIG_ECHO is not set # CONFIG_ECHO is not set
@ -2341,10 +2330,13 @@ CONFIG_MD=y
CONFIG_BLK_DEV_MD=y CONFIG_BLK_DEV_MD=y
# CONFIG_MD_AUTODETECT is not set # CONFIG_MD_AUTODETECT is not set
CONFIG_MD_BITMAP_FILE=y CONFIG_MD_BITMAP_FILE=y
# CONFIG_MD_LINEAR is not set
CONFIG_MD_RAID0=m CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m CONFIG_MD_RAID456=m
# CONFIG_MD_MULTIPATH is not set
# CONFIG_MD_FAULTY is not set
# CONFIG_MD_CLUSTER is not set # CONFIG_MD_CLUSTER is not set
CONFIG_BCACHE=m CONFIG_BCACHE=m
# CONFIG_BCACHE_DEBUG is not set # CONFIG_BCACHE_DEBUG is not set
@ -2564,6 +2556,7 @@ CONFIG_NET_VENDOR_WANGXUN=y
# CONFIG_NET_VENDOR_WIZNET is not set # CONFIG_NET_VENDOR_WIZNET is not set
CONFIG_NET_VENDOR_XILINX=y CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_EMACLITE is not set # CONFIG_XILINX_EMACLITE is not set
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set # CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set # CONFIG_FDDI is not set
# CONFIG_HIPPI is not set # CONFIG_HIPPI is not set
@ -2622,7 +2615,6 @@ CONFIG_FIXED_PHY=m
# CONFIG_DP83867_PHY is not set # CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set # CONFIG_DP83869_PHY is not set
# CONFIG_DP83TD510_PHY is not set # CONFIG_DP83TD510_PHY is not set
# CONFIG_DP83TG720_PHY is not set
# CONFIG_VITESSE_PHY is not set # CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set # CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_PSE_CONTROLLER is not set # CONFIG_PSE_CONTROLLER is not set
@ -3081,13 +3073,13 @@ CONFIG_HWMON=m
# CONFIG_SENSORS_DRIVETEMP is not set # CONFIG_SENSORS_DRIVETEMP is not set
# CONFIG_SENSORS_DS620 is not set # CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set # CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_DELL_SMM is not set
# CONFIG_SENSORS_I5K_AMB is not set # CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set # CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set # CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set # CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set # CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_FTSTEUTATES is not set # CONFIG_SENSORS_FTSTEUTATES is not set
# CONFIG_SENSORS_GIGABYTE_WATERFORCE is not set
# CONFIG_SENSORS_GL518SM is not set # CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set # CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_G760A is not set # CONFIG_SENSORS_G760A is not set
@ -3219,7 +3211,6 @@ CONFIG_SENSORS_ACPI_POWER=m
CONFIG_THERMAL=y CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set # CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set # CONFIG_THERMAL_STATISTICS is not set
# CONFIG_THERMAL_DEBUGFS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0 CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
# CONFIG_THERMAL_WRITABLE_TRIPS is not set # CONFIG_THERMAL_WRITABLE_TRIPS is not set
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
@@ -3369,6 +3360,7 @@ CONFIG_BCMA_POSSIBLE=y
 # CONFIG_MFD_SM501 is not set
 # CONFIG_MFD_SKY81452 is not set
 # CONFIG_MFD_SYSCON is not set
+# CONFIG_MFD_TI_AM335X_TSCADC is not set
 # CONFIG_MFD_LP3943 is not set
 # CONFIG_MFD_TI_LMU is not set
 # CONFIG_TPS6105X is not set
@@ -3436,7 +3428,6 @@ CONFIG_DRM_GEM_SHMEM_HELPER=m
 # CONFIG_DRM_AMDGPU is not set
 # CONFIG_DRM_NOUVEAU is not set
 # CONFIG_DRM_I915 is not set
-# CONFIG_DRM_XE is not set
 # CONFIG_DRM_VGEM is not set
 # CONFIG_DRM_VKMS is not set
 CONFIG_DRM_VMWGFX=m
@@ -3464,6 +3455,7 @@ CONFIG_DRM_PANEL_BRIDGE=y
 # CONFIG_DRM_ANALOGIX_ANX78XX is not set
 # end of Display Interface Bridges
 
+# CONFIG_DRM_LOONGSON is not set
 # CONFIG_DRM_ETNAVIV is not set
 CONFIG_DRM_BOCHS=m
 CONFIG_DRM_CIRRUS_QEMU=m
@@ -3475,6 +3467,7 @@ CONFIG_DRM_VBOXVIDEO=m
 # CONFIG_DRM_GUD is not set
 # CONFIG_DRM_SSD130X is not set
 CONFIG_DRM_HYPERV=m
+# CONFIG_DRM_LEGACY is not set
 CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=m
 
 #
@@ -3494,6 +3487,7 @@ CONFIG_FB=m
 # CONFIG_FB_NVIDIA is not set
 # CONFIG_FB_RIVA is not set
 # CONFIG_FB_I740 is not set
+# CONFIG_FB_LE80578 is not set
 # CONFIG_FB_MATROX is not set
 # CONFIG_FB_RADEON is not set
 # CONFIG_FB_ATY128 is not set
@@ -3528,8 +3522,9 @@ CONFIG_FB_SYS_FILLRECT=m
 CONFIG_FB_SYS_COPYAREA=m
 CONFIG_FB_SYS_IMAGEBLIT=m
 # CONFIG_FB_FOREIGN_ENDIAN is not set
-CONFIG_FB_SYSMEM_FOPS=m
+CONFIG_FB_SYS_FOPS=m
 CONFIG_FB_DEFERRED_IO=y
+CONFIG_FB_IOMEM_FOPS=m
 CONFIG_FB_SYSMEM_HELPERS=y
 CONFIG_FB_SYSMEM_HELPERS_DEFERRED=y
 # CONFIG_FB_MODE_HELPERS is not set
@@ -4132,7 +4127,6 @@ CONFIG_VIRTIO_PCI_LIB=y
 CONFIG_VIRTIO_PCI_LIB_LEGACY=y
 CONFIG_VIRTIO_MENU=y
 CONFIG_VIRTIO_PCI=y
-CONFIG_VIRTIO_PCI_ADMIN_LEGACY=y
 CONFIG_VIRTIO_PCI_LEGACY=y
 CONFIG_VIRTIO_VDPA=m
 # CONFIG_VIRTIO_PMEM is not set
@@ -4312,7 +4306,6 @@ CONFIG_RPMSG_VIRTIO=m
 #
 # Qualcomm SoC drivers
 #
-# CONFIG_QCOM_PMIC_PDCHARGER_ULOG is not set
 # end of Qualcomm SoC drivers
 
 # CONFIG_SOC_TI is not set
@@ -4387,7 +4380,6 @@ CONFIG_MEMORY=y
 #
 # Performance monitor support
 #
-# CONFIG_DWC_PCIE_PMU is not set
 # end of Performance monitor support
 
 # CONFIG_RAS is not set
@@ -4408,7 +4400,14 @@ CONFIG_DAX=y
 # CONFIG_DEV_DAX is not set
 CONFIG_NVMEM=y
 CONFIG_NVMEM_SYSFS=y
+# CONFIG_NVMEM_LAYOUTS is not set
+#
+# Layout Types
+#
+# CONFIG_NVMEM_LAYOUT_SL28_VPD is not set
+# CONFIG_NVMEM_LAYOUT_ONIE_TLV is not set
+# end of Layout Types
 
 # CONFIG_NVMEM_RMEM is not set
 
 #
@@ -4435,7 +4434,6 @@ CONFIG_NVMEM_SYSFS=y
 CONFIG_DCACHE_WORD_ACCESS=y
 # CONFIG_VALIDATE_FS_PARSER is not set
 CONFIG_FS_IOMAP=y
-CONFIG_FS_STACK=y
 CONFIG_BUFFER_HEAD=y
 CONFIG_LEGACY_DIRECT_IO=y
 CONFIG_EXT2_FS=m
@@ -4527,7 +4525,7 @@ CONFIG_QFMT_V1=m
 CONFIG_QFMT_V2=m
 CONFIG_QUOTACTL=y
 CONFIG_AUTOFS_FS=m
-CONFIG_FUSE_FS=m
+CONFIG_FUSE_FS=y
 # CONFIG_CUSE is not set
 CONFIG_VIRTIO_FS=m
 CONFIG_OVERLAY_FS=y
@@ -4598,9 +4596,9 @@ CONFIG_TMPFS_XATTR=y
 CONFIG_TMPFS_INODE64=y
 # CONFIG_TMPFS_QUOTA is not set
 CONFIG_HUGETLBFS=y
-# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
 CONFIG_HUGETLB_PAGE=y
 CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y
+# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
 CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
 CONFIG_CONFIGFS_FS=y
 CONFIG_EFIVAR_FS=y
@@ -4656,7 +4654,16 @@ CONFIG_SYSV_FS=m
 CONFIG_UFS_FS=m
 # CONFIG_UFS_FS_WRITE is not set
 # CONFIG_UFS_DEBUG is not set
-# CONFIG_EROFS_FS is not set
+CONFIG_EROFS_FS=y
+# CONFIG_EROFS_FS_DEBUG is not set
+CONFIG_EROFS_FS_XATTR=y
+CONFIG_EROFS_FS_POSIX_ACL=y
+CONFIG_EROFS_FS_SECURITY=y
+CONFIG_EROFS_FS_ZIP=y
+CONFIG_EROFS_FS_ZIP_LZMA=y
+CONFIG_EROFS_FS_ZIP_DEFLATE=y
+CONFIG_EROFS_FS_PCPU_KTHREAD=y
+CONFIG_EROFS_FS_PCPU_KTHREAD_HIPRI=y
 CONFIG_VBOXSF_FS=m
 CONFIG_NETWORK_FILESYSTEMS=y
 CONFIG_NFS_FS=m
@@ -4688,7 +4695,6 @@ CONFIG_NFSD_SCSILAYOUT=y
 CONFIG_NFSD_FLEXFILELAYOUT=y
 # CONFIG_NFSD_V4_2_INTER_SSC is not set
 # CONFIG_NFSD_V4_SECURITY_LABEL is not set
-# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
 CONFIG_GRACE_PERIOD=m
 CONFIG_LOCKD=m
 CONFIG_LOCKD_V4=y
@@ -4948,12 +4954,14 @@ CONFIG_CRYPTO_TWOFISH_COMMON=m
 CONFIG_CRYPTO_ADIANTUM=m
 CONFIG_CRYPTO_CHACHA20=m
 CONFIG_CRYPTO_CBC=y
+# CONFIG_CRYPTO_CFB is not set
 CONFIG_CRYPTO_CTR=m
 CONFIG_CRYPTO_CTS=y
 CONFIG_CRYPTO_ECB=y
 CONFIG_CRYPTO_HCTR2=m
 CONFIG_CRYPTO_KEYWRAP=m
 CONFIG_CRYPTO_LRW=m
+# CONFIG_CRYPTO_OFB is not set
 CONFIG_CRYPTO_PCBC=m
 CONFIG_CRYPTO_XCTR=m
 CONFIG_CRYPTO_XTS=y
@@ -5002,7 +5010,7 @@ CONFIG_CRYPTO_XXHASH=m
 #
 # CRCs (cyclic redundancy checks)
 #
-CONFIG_CRYPTO_CRC32C=m
+CONFIG_CRYPTO_CRC32C=y
 CONFIG_CRYPTO_CRC32=m
 CONFIG_CRYPTO_CRCT10DIF=y
 CONFIG_CRYPTO_CRC64_ROCKSOFT=m
@@ -5108,7 +5116,6 @@ CONFIG_CRYPTO_DEV_QAT=m
 # CONFIG_CRYPTO_DEV_QAT_C3XXX is not set
 # CONFIG_CRYPTO_DEV_QAT_C62X is not set
 CONFIG_CRYPTO_DEV_QAT_4XXX=m
-# CONFIG_CRYPTO_DEV_QAT_420XX is not set
 CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
 # CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set
 # CONFIG_CRYPTO_DEV_QAT_C62XVF is not set
@@ -5196,7 +5203,7 @@ CONFIG_CRC32_SLICEBY8=y
 CONFIG_CRC64=m
 # CONFIG_CRC4 is not set
 CONFIG_CRC7=m
-CONFIG_LIBCRC32C=m
+CONFIG_LIBCRC32C=y
 CONFIG_CRC8=m
 CONFIG_XXHASH=y
 # CONFIG_RANDOM32_SELFTEST is not set
@@ -5216,7 +5223,7 @@ CONFIG_XZ_DEC_POWERPC=y
 CONFIG_XZ_DEC_ARM=y
 CONFIG_XZ_DEC_ARMTHUMB=y
 CONFIG_XZ_DEC_SPARC=y
-# CONFIG_XZ_DEC_MICROLZMA is not set
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_XZ_DEC_BCJ=y
 # CONFIG_XZ_DEC_TEST is not set
 CONFIG_DECOMPRESS_GZIP=y
@@ -5312,7 +5319,7 @@ CONFIG_DEBUG_MISC=y
 # Compile-time checks and compiler options
 #
 CONFIG_DEBUG_INFO=y
-CONFIG_AS_HAS_NON_CONST_ULEB128=y
+CONFIG_AS_HAS_NON_CONST_LEB128=y
 # CONFIG_DEBUG_INFO_NONE is not set
 # CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
 # CONFIG_DEBUG_INFO_DWARF4 is not set