Compare commits

...

16 Commits

Author SHA1 Message Date
218f848170 chore: release (#41)
Signed-off-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-04-15 19:15:00 +00:00
9d8c516a29 build(deps): bump sysinfo from 0.30.9 to 0.30.10 (#86)
Bumps [sysinfo](https://github.com/GuillaumeGomez/sysinfo) from 0.30.9 to 0.30.10.
- [Changelog](https://github.com/GuillaumeGomez/sysinfo/blob/v0.30.10/CHANGELOG.md)
- [Commits](https://github.com/GuillaumeGomez/sysinfo/compare/v0.30.9...v0.30.10)

---
updated-dependencies:
- dependency-name: sysinfo
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 17:24:41 +00:00
89055ef77c feat: oci compliance work (#85)
* chore: rework oci crate to be more composable

* feat: image pull is now internally explicit

* feat: utilize vfs for assembling oci images

* feat: rework oci to preserve permissions via a vfs
2024-04-15 17:24:14 +00:00
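The VFS approach described in this commit is easiest to picture as a small in-memory tree that records ownership and permission bits separately from the host filesystem. The sketch below is purely illustrative (the `VfsNode` type and `insert` helper are invented here, not krata's actual code), showing how mode/uid/gid can be carried along while layers are assembled.

```rust
// Illustrative only: a toy in-memory VFS node that keeps permission metadata
// with each entry, so assembling OCI layers does not depend on the host
// filesystem honoring ownership bits. Not krata's actual implementation.
use std::collections::BTreeMap;

#[derive(Default)]
struct VfsNode {
    mode: u32,
    uid: u32,
    gid: u32,
    children: BTreeMap<String, VfsNode>,
}

impl VfsNode {
    // Insert a node at a slash-separated path, creating intermediate
    // directories with default metadata as needed.
    fn insert(&mut self, path: &str, node: VfsNode) {
        match path.split_once('/') {
            Some((head, rest)) => self
                .children
                .entry(head.to_string())
                .or_default()
                .insert(rest, node),
            None => {
                self.children.insert(path.to_string(), node);
            }
        }
    }
}
```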
24c71e9725 feat: oci packer can now use mksquashfs if available (#70)
* feat: oci packer can now use mksquashfs if available

* fix: use nproc in kernel build script for default jobs, and fix DEV.md guide

* feat: working erofs backend
2024-04-15 00:19:38 +00:00
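A rough sketch of the shell-out logic this commit describes: probe for an external `mksquashfs` binary and fall back to `mkfs.erofs` otherwise. The helper names and argument choices are assumptions for illustration, not the crate's actual packer code.

```rust
// Hedged sketch: prefer mksquashfs when installed, otherwise pack with
// mkfs.erofs. Argument layouts follow the tools' usual CLIs
// (mksquashfs SOURCE DEST, mkfs.erofs DEST SOURCE).
use std::io;
use std::process::Command;

fn mksquashfs_available() -> bool {
    Command::new("mksquashfs")
        .arg("-version")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}

fn pack_rootfs(source_dir: &str, output: &str) -> io::Result<()> {
    let status = if mksquashfs_available() {
        Command::new("mksquashfs").args([source_dir, output]).status()?
    } else {
        Command::new("mkfs.erofs").args([output, source_dir]).status()?
    };
    if status.success() {
        Ok(())
    } else {
        Err(io::Error::new(io::ErrorKind::Other, "image packing failed"))
    }
}
```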
0a6a112133 feat: basic kratactl top command (#72)
* feat: basic kratactl top command

* fix: use magic bytes 0xff 0xff in idm to improve reliability
2024-04-14 22:32:34 +00:00
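The magic-byte change mentioned above is about making the IDM byte stream easy to resynchronize. The framing below is a guess at the general idea (the two-byte length prefix is my assumption), not the actual krata wire format.

```rust
// Illustrative framing only: a 0xff 0xff preamble lets a reader that has lost
// its place scan forward to the next packet boundary. The length prefix here
// is an assumption for the sketch, not krata's real IDM format.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(payload.len() + 4);
    buf.extend_from_slice(&[0xff, 0xff]); // magic preamble
    buf.extend_from_slice(&(payload.len() as u16).to_be_bytes());
    buf.extend_from_slice(payload);
    buf
}

/// Find the offset just past the next 0xff 0xff pair, if any.
fn resync(stream: &[u8]) -> Option<usize> {
    stream
        .windows(2)
        .position(|w| w[0] == 0xff && w[1] == 0xff)
        .map(|i| i + 2)
}
```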
1627cbcdd7 feat: idm snooping (#71)
Implement IDM snooping, a new feature that lets you snoop on messages between guests and the host. The feature exposes the IDM packets sent and received to the API, allowing kratactl to listen for messages and feed them to a user for debugging purposes.
2024-04-14 11:54:21 +00:00
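A minimal consumer sketch of the snoop stream, based on the `snoop_idm` call and the `SnoopIdmReply` fields (`from`, `to`, `packet`) shown in the kratactl and daemon code later in this diff:

```rust
// Sketch of consuming the IDM snoop stream; connecting the client and
// choosing an output format are left to the caller.
use anyhow::Result;
use krata::v1::control::{control_service_client::ControlServiceClient, SnoopIdmRequest};
use tokio_stream::StreamExt;
use tonic::transport::Channel;

async fn snoop_idm_bus(mut client: ControlServiceClient<Channel>) -> Result<()> {
    let mut stream = client.snoop_idm(SnoopIdmRequest {}).await?.into_inner();
    while let Some(reply) = stream.next().await {
        let reply = reply?;
        // Each reply names the sending and receiving domains and carries the
        // raw IDM packet, which can be decoded or pretty-printed as needed.
        println!("({} -> {}) {:?}", reply.from, reply.to, reply.packet);
    }
    Ok(())
}
```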
f8247f13e4 build: use LTO for release builds and strip guestinit (#68)
* initrd: strip guestinit binary before adding it to initramfs

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

* build: use LTO for release profile artifacts

this allows us to save ~25-30% on binary sizes, at least in guestinit

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

* revert strip command usage, breaks arm

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

* build: use strip=symbols

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>

---------

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-13 09:20:24 +00:00
6d07112e3d feat: implement oci image progress (#64)
* feat: oci progress events

* feat: oci progress bars on launch
2024-04-12 18:09:26 +00:00
6cef03bffa debug: common: run programs in a way that is compatible with alpine doas-sudo-shim (#53)
The doas-sudo-shim (as used by Alpine) does not support passing environment variables through in the same way that sudo does, so `sh -c` is used instead.

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-12 09:20:34 +00:00
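The workaround amounts to folding the environment assignment into a single `sh -c` string so the shim never has to forward variables itself. A hypothetical helper (the names are mine, not the crate's):

```rust
// Instead of `sudo ENV=value prog`, which the Alpine doas-sudo-shim rejects,
// run `sudo sh -c 'ENV=value exec prog'`.
use std::process::{Command, ExitStatus};

fn sudo_with_env(var: &str, value: &str, program: &str) -> std::io::Result<ExitStatus> {
    Command::new("sudo")
        .arg("sh")
        .arg("-c")
        .arg(format!("{var}={value} exec {program}"))
        .status()
}
```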
73fd95dbe2 guest: init: default to xterm if TERM is not set (#52)
Most terminal emulators support the xterm control codes more faithfully than the
vt100 ones.

Fixes #51.

Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-12 08:52:18 +00:00
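A minimal sketch of the behavior described above, assuming the guest init simply falls back when `TERM` is unset (the real init may plumb this differently):

```rust
use std::env;

// Fall back to xterm when the controlling environment did not provide TERM.
fn terminal_type() -> String {
    env::var("TERM").unwrap_or_else(|_| "xterm".to_string())
}
```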
f41a1e2168 build: target: use alpine rust triplets when building on alpine (#49)
Signed-off-by: Ariadne Conill <ariadne@ariadne.space>
2024-04-12 08:40:44 +00:00
346cf4a7fa build(deps): bump async-trait from 0.1.79 to 0.1.80 (#48)
Bumps [async-trait](https://github.com/dtolnay/async-trait) from 0.1.79 to 0.1.80.
- [Release notes](https://github.com/dtolnay/async-trait/releases)
- [Commits](https://github.com/dtolnay/async-trait/compare/0.1.79...0.1.80)

---
updated-dependencies:
- dependency-name: async-trait
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-12 07:35:12 +00:00
5e16f3149f feat: guest metrics support (#46)
* feat: initial support for idm send in daemon

* feat: implement IdmClient backend support

* feat: daemon idm now uses IdmClient

* fix: implement channel destruction propagation

* feat: implement request response idm system

* feat: implement metrics support

* proto: move metrics into GuestMetrics for reusability

* fix: log level of guest agent was trace

* feat: metrics tree with process information
2024-04-12 07:34:46 +00:00
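A sketch of how a client asks the daemon for a guest's metric tree, based on the `read_guest_metrics` call and `ReadGuestMetricsReply.root` field used by the kratactl metrics command later in this diff:

```rust
use anyhow::Result;
use krata::v1::{
    common::GuestMetricNode,
    control::{control_service_client::ControlServiceClient, ReadGuestMetricsRequest},
};
use tonic::transport::Channel;

// Request the metric tree for a single guest; `root` is None if the guest
// reported nothing.
async fn read_metrics(
    mut client: ControlServiceClient<Channel>,
    guest_id: String,
) -> Result<Option<GuestMetricNode>> {
    let reply = client
        .read_guest_metrics(ReadGuestMetricsRequest { guest_id })
        .await?
        .into_inner();
    Ok(reply.root)
}
```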
ec9060d872 build(deps): bump anyhow from 1.0.81 to 1.0.82 (#42)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.81 to 1.0.82.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.81...1.0.82)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-10 06:19:10 +00:00
6050e99aa7 chore: release (#39)
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-04-09 11:47:58 +00:00
7cfdb27d23 chore(workflows): fix release asset assembly for alpine/debian packages (#40) 2024-04-09 04:45:36 -07:00
81 changed files with 3884 additions and 1108 deletions


@ -6,6 +6,25 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.0.9](https://github.com/edera-dev/krata/compare/v0.0.8...v0.0.9) - 2024-04-15
### Added
- oci compliance work ([#85](https://github.com/edera-dev/krata/pull/85))
- oci packer can now use mksquashfs if available ([#70](https://github.com/edera-dev/krata/pull/70))
- basic kratactl top command ([#72](https://github.com/edera-dev/krata/pull/72))
- idm snooping ([#71](https://github.com/edera-dev/krata/pull/71))
- implement oci image progress ([#64](https://github.com/edera-dev/krata/pull/64))
- guest metrics support ([#46](https://github.com/edera-dev/krata/pull/46))
### Other
- init: default to xterm if TERM is not set ([#52](https://github.com/edera-dev/krata/pull/52))
- update Cargo.toml dependencies
## [0.0.8](https://github.com/edera-dev/krata/compare/v0.0.7...v0.0.8) - 2024-04-09
### Other
- update Cargo.lock dependencies
## [0.0.7](https://github.com/edera-dev/krata/compare/v0.0.6...v0.0.7) - 2024-04-09
### Other

347 Cargo.lock (generated)

@ -17,6 +17,18 @@ version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "ahash"
version = "0.8.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e89da841a80418a9b391ebaea17f5c112ffaaa96f621d2c285b5174da76b9011"
dependencies = [
"cfg-if",
"once_cell",
"version_check",
"zerocopy",
]
[[package]]
name = "aho-corasick"
version = "1.1.3"
@ -26,6 +38,12 @@ dependencies = [
"memchr",
]
[[package]]
name = "allocator-api2"
version = "0.2.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c6cb57a04249c6480766f7f7cef5467412af1490f8d1e243141daddada3264f"
[[package]]
name = "anstream"
version = "0.6.13"
@ -76,9 +94,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.81"
version = "1.0.82"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0952808a6c2afd1aa8947271f3a60f1a6763c7b912d210184c5149b5cf147247"
checksum = "f538837af36e6f6a9be0faa67f9a314f8119e4e4b5867c6ab40ed60360142519"
[[package]]
name = "arrayvec"
@ -128,9 +146,9 @@ dependencies = [
[[package]]
name = "async-trait"
version = "0.1.79"
version = "0.1.80"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a507401cad91ec6a857ed5513a2073c82a9b9048762b885bb98655b306964681"
checksum = "c6fa2087f2753a7da8cc1c0dbfcf89579dd57458e36769de5ac750b4671737ca"
dependencies = [
"proc-macro2",
"quote",
@ -302,6 +320,21 @@ version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "514de17de45fdb8dc022b1a7975556c53c86f9f0aa5f534b98977b171857c2c9"
[[package]]
name = "cassowary"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df8670b8c7b9dae1793364eafadf7239c40d669904660c5960d74cfd80b46a53"
[[package]]
name = "castaway"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a17ed5635fc8536268e5d4de1e22e81ac34419e5f052d4d51f4e01dcc263fcc"
dependencies = [
"rustversion",
]
[[package]]
name = "cc"
version = "1.0.90"
@ -421,6 +454,38 @@ dependencies = [
"unicode-width",
]
[[package]]
name = "compact_str"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f86b9c4c00838774a6d902ef931eff7470720c51d90c2e32cfe15dc304737b3f"
dependencies = [
"castaway",
"cfg-if",
"itoa",
"ryu",
"static_assertions",
]
[[package]]
name = "console"
version = "0.15.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e1f83fc076bd6dd27517eacdf25fef6c4dfe5f1d7448bafaaf3a26f13b5e4eb"
dependencies = [
"encode_unicode",
"lazy_static",
"libc",
"unicode-width",
"windows-sys 0.52.0",
]
[[package]]
name = "core-foundation-sys"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06ea2b9bc92be3c2baa9334a323ebca2d6f074ff852cd1d7b11064035cd3868f"
[[package]]
name = "cpufeatures"
version = "0.2.12"
@ -439,6 +504,31 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "crossbeam-deque"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613f8cc01fe9cf1a3eb3d7f488fd2fa8388403e97039e2f73692932e291a770d"
dependencies = [
"crossbeam-epoch",
"crossbeam-utils",
]
[[package]]
name = "crossbeam-epoch"
version = "0.9.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e"
dependencies = [
"crossbeam-utils",
]
[[package]]
name = "crossbeam-utils"
version = "0.8.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "248e3bacc7dc6baa3b21e405ee045c3047101a49145e7e9eca583ab4c2ca5345"
[[package]]
name = "crossterm"
version = "0.27.0"
@ -447,6 +537,7 @@ checksum = "f476fe445d41c9e991fd07515a6f463074b782242ccf4a5b7b1d1012e70824df"
dependencies = [
"bitflags 2.5.0",
"crossterm_winapi",
"futures-core",
"libc",
"mio",
"parking_lot",
@ -662,6 +753,12 @@ version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4445909572dbd556c457c849c4ca58623d84b27c8fff1e74b0b4227d8b90d17b"
[[package]]
name = "encode_unicode"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a357d28ed41a50f9c765dbfe56cbc04a64e53e5fc58ba79fbc34c10ef3df831f"
[[package]]
name = "env_filter"
version = "0.1.0"
@ -710,6 +807,17 @@ dependencies = [
"arrayvec",
]
[[package]]
name = "fancy-duration"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b3ae60718ae501dca9d27fd0e322683c86a95a1a01fac1807aa2f9b035cc0882"
dependencies = [
"anyhow",
"lazy_static",
"regex",
]
[[package]]
name = "fastrand"
version = "2.0.2"
@ -938,6 +1046,10 @@ name = "hashbrown"
version = "0.14.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "290f1a1d9242c78d09ce40a5e87e7554ee637af1351968159f4952f028f75604"
dependencies = [
"ahash",
"allocator-api2",
]
[[package]]
name = "heapless"
@ -1041,6 +1153,12 @@ version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
[[package]]
name = "human_bytes"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91f255a4535024abf7640cb288260811fc14794f62b063652ed349f9a6c2348e"
[[package]]
name = "humantime"
version = "2.1.0"
@ -1175,6 +1293,34 @@ dependencies = [
"hashbrown 0.14.3",
]
[[package]]
name = "indicatif"
version = "0.17.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "763a5a8f45087d6bcea4222e7b72c291a054edf80e4ef6efd2a4979878c7bea3"
dependencies = [
"console",
"instant",
"number_prefix",
"portable-atomic",
"unicode-width",
]
[[package]]
name = "indoc"
version = "2.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b248f5224d1d606005e02c97f5aa4e88eeb230488bcc03bc9ca4d7991399f2b5"
[[package]]
name = "instant"
version = "0.1.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a5bbe824c507c5da5956355e86a746d82e0e1464f65d862cc5e71da70e94b2c"
dependencies = [
"cfg-if",
]
[[package]]
name = "ipnet"
version = "2.9.0"
@ -1225,9 +1371,10 @@ dependencies = [
[[package]]
name = "krata"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"anyhow",
"async-trait",
"bytes",
"libc",
"log",
@ -1237,6 +1384,8 @@ dependencies = [
"prost-build",
"prost-reflect",
"prost-reflect-build",
"prost-types",
"scopeguard",
"serde",
"tokio",
"tokio-stream",
@ -1259,7 +1408,7 @@ dependencies = [
[[package]]
name = "krata-ctl"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"anyhow",
"async-stream",
@ -1268,11 +1417,17 @@ dependencies = [
"crossterm",
"ctrlc",
"env_logger",
"fancy-duration",
"human_bytes",
"indicatif",
"krata",
"log",
"prost-reflect",
"prost-types",
"ratatui",
"serde_json",
"serde_yaml",
"termtree",
"tokio",
"tokio-stream",
"tonic",
@ -1281,7 +1436,7 @@ dependencies = [
[[package]]
name = "krata-daemon"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"anyhow",
"async-stream",
@ -1292,6 +1447,7 @@ dependencies = [
"env_logger",
"futures",
"krata",
"krata-oci",
"krata-runtime",
"log",
"prost",
@ -1305,7 +1461,7 @@ dependencies = [
[[package]]
name = "krata-guest"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"anyhow",
"cgroups-rs",
@ -1323,13 +1479,13 @@ dependencies = [
"serde",
"serde_json",
"sys-mount",
"sysinfo",
"tokio",
"walkdir",
]
[[package]]
name = "krata-network"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"anyhow",
"async-trait",
@ -1353,7 +1509,7 @@ dependencies = [
[[package]]
name = "krata-oci"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"anyhow",
"async-compression",
@ -1361,11 +1517,13 @@ dependencies = [
"backhand",
"bytes",
"env_logger",
"indexmap 2.2.6",
"krata-tokio-tar",
"log",
"oci-spec",
"path-clean",
"reqwest",
"scopeguard",
"serde",
"serde_json",
"sha256",
@ -1378,7 +1536,7 @@ dependencies = [
[[package]]
name = "krata-runtime"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"anyhow",
"backhand",
@ -1415,7 +1573,7 @@ dependencies = [
[[package]]
name = "krata-xencall"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"env_logger",
"libc",
@ -1428,7 +1586,7 @@ dependencies = [
[[package]]
name = "krata-xenclient"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"async-trait",
"elf",
@ -1449,7 +1607,7 @@ dependencies = [
[[package]]
name = "krata-xenevtchn"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"libc",
"log",
@ -1460,7 +1618,7 @@ dependencies = [
[[package]]
name = "krata-xengnt"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"libc",
"nix 0.28.0",
@ -1469,7 +1627,7 @@ dependencies = [
[[package]]
name = "krata-xenstore"
version = "0.0.7"
version = "0.0.9"
dependencies = [
"byteorder",
"env_logger",
@ -1540,6 +1698,15 @@ dependencies = [
"libc",
]
[[package]]
name = "lru"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3262e75e648fce39813cb56ac41f3c3e3f65217ebf3844d818d1f9398cfb0dc"
dependencies = [
"hashbrown 0.14.3",
]
[[package]]
name = "lzma-sys"
version = "0.1.20"
@ -1718,6 +1885,15 @@ dependencies = [
"minimal-lexical",
]
[[package]]
name = "ntapi"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8a3895c6391c39d7fe7ebc444a87eb2991b2a0bc718fdabd071eec617fc68e4"
dependencies = [
"winapi",
]
[[package]]
name = "num-traits"
version = "0.2.18"
@ -1737,6 +1913,12 @@ dependencies = [
"libc",
]
[[package]]
name = "number_prefix"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "830b246a0e5f20af87141b25c173cd1b609bd7779a4617d6ec582abaf90870f3"
[[package]]
name = "object"
version = "0.32.2"
@ -1881,6 +2063,12 @@ version = "0.3.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d231b230927b5e4ad203db57bbcbee2802f6bce620b1e4a9024a07d94e2907ec"
[[package]]
name = "portable-atomic"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7170ef9988bc169ba16dd36a7fa041e5c4cbeb6a35b76d4c03daded371eae7c0"
[[package]]
name = "ppv-lite86"
version = "0.2.17"
@ -2074,6 +2262,46 @@ dependencies = [
"getrandom",
]
[[package]]
name = "ratatui"
version = "0.26.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bcb12f8fbf6c62614b0d56eb352af54f6a22410c3b079eb53ee93c7b97dd31d8"
dependencies = [
"bitflags 2.5.0",
"cassowary",
"compact_str",
"crossterm",
"indoc",
"itertools",
"lru",
"paste",
"stability",
"strum",
"unicode-segmentation",
"unicode-width",
]
[[package]]
name = "rayon"
version = "1.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b418a60154510ca1a002a752ca9714984e21e4241e804d32555251faf8b78ffa"
dependencies = [
"either",
"rayon-core",
]
[[package]]
name = "rayon-core"
version = "1.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1465873a3dfdaa8ae7cb14b4383657caab0b3e8a0aa9ae8e04b044854c8dfce2"
dependencies = [
"crossbeam-deque",
"crossbeam-utils",
]
[[package]]
name = "redb"
version = "2.0.0"
@ -2487,12 +2715,28 @@ version = "0.9.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67"
[[package]]
name = "stability"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebd1b177894da2a2d9120208c3386066af06a488255caabc5de8ddca22dbc3ce"
dependencies = [
"quote",
"syn 1.0.109",
]
[[package]]
name = "stable_deref_trait"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
[[package]]
name = "static_assertions"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f"
[[package]]
name = "strsim"
version = "0.10.0"
@ -2510,6 +2754,9 @@ name = "strum"
version = "0.26.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d8cec3501a5194c432b2b7976db6b7d10ec95c253208b45f83f7136aa985e29"
dependencies = [
"strum_macros",
]
[[package]]
name = "strum_macros"
@ -2571,6 +2818,21 @@ dependencies = [
"tracing",
]
[[package]]
name = "sysinfo"
version = "0.30.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26d7c217777061d5a2d652aea771fb9ba98b6dade657204b08c4b9604d11555b"
dependencies = [
"cfg-if",
"core-foundation-sys",
"libc",
"ntapi",
"once_cell",
"rayon",
"windows",
]
[[package]]
name = "tap"
version = "1.0.1"
@ -2589,6 +2851,12 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "termtree"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76"
[[package]]
name = "thiserror"
version = "1.0.58"
@ -2880,6 +3148,12 @@ dependencies = [
"tinyvec",
]
[[package]]
name = "unicode-segmentation"
version = "1.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4c87d22b6e3f4a18d4d40ef354e97c90fcb14dd91d7dc0aa9d8a1172ebf7202"
[[package]]
name = "unicode-width"
version = "0.1.11"
@ -3071,6 +3345,25 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows"
version = "0.52.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e48a53791691ab099e5e2ad123536d0fff50652600abaf43bbf952894110d0be"
dependencies = [
"windows-core",
"windows-targets 0.52.4",
]
[[package]]
name = "windows-core"
version = "0.52.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "33ab640c8d7e35bf8ba19b884ba838ceb4fba93a4e8c65a9059d08afcfc683d9"
dependencies = [
"windows-targets 0.52.4",
]
[[package]]
name = "windows-sys"
version = "0.48.0"
@ -3251,6 +3544,26 @@ dependencies = [
"lzma-sys",
]
[[package]]
name = "zerocopy"
version = "0.7.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "74d4d3961e53fa4c9a25a8637fc2bfaf2595b3d3ae34875568a5cf64787716be"
dependencies = [
"zerocopy-derive",
]
[[package]]
name = "zerocopy-derive"
version = "0.7.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ce1b18ccd8e73a9321186f97e46f9f04b778851177567b1975109d26a08d2a6"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.57",
]
[[package]]
name = "zeroize"
version = "1.7.0"


@ -16,7 +16,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.0.7"
version = "0.0.9"
homepage = "https://krata.dev"
license = "Apache-2.0"
repository = "https://github.com/edera-dev/krata"
@ -26,7 +26,7 @@ anyhow = "1.0"
arrayvec = "0.7.4"
async-compression = "0.4.8"
async-stream = "0.3.5"
async-trait = "0.1.77"
async-trait = "0.1.80"
backhand = "0.15.0"
byteorder = "1"
bytes = "1.5.0"
@ -38,8 +38,12 @@ ctrlc = "3.4.4"
elf = "0.7.4"
env_logger = "0.11.0"
etherparse = "0.14.3"
fancy-duration = "0.9.2"
flate2 = "1.0"
futures = "0.3.30"
human_bytes = "0.4"
indexmap = "2.2.6"
indicatif = "0.17.8"
ipnetwork = "0.20.0"
libc = "0.2"
log = "0.4.20"
@ -55,15 +59,20 @@ path-clean = "1.0.1"
prost = "0.12.4"
prost-build = "0.12.4"
prost-reflect-build = "0.13.0"
prost-types = "0.12.4"
rand = "0.8.5"
ratatui = "0.26.1"
redb = "2.0.0"
rtnetlink = "0.14.1"
scopeguard = "1.2.0"
serde_json = "1.0.113"
serde_yaml = "0.9"
sha256 = "1.5.0"
signal-hook = "0.3.17"
slice-copy = "0.3.0"
smoltcp = "0.11.0"
sysinfo = "0.30.10"
termtree = "0.4.1"
thiserror = "1.0"
tokio-tun = "0.11.4"
tonic-build = "0.11.0"
@ -109,3 +118,7 @@ features = ["tls"]
[workspace.dependencies.uuid]
version = "1.6.1"
features = ["v4"]
[profile.release]
lto = "fat"
strip = "symbols"

15 DEV.md

@ -28,10 +28,21 @@ it's corresponding code path from the above table.
1. Install the specified Debian version on a x86_64 host _capable_ of KVM (NOTE: KVM is not used, Xen is a type-1 hypervisor).
2. Install required packages: `apt install git xen-system-amd64 flex bison libelf-dev libssl-dev bc`
2. Install required packages:
```sh
$ apt install git xen-system-amd64 build-essential libclang-dev musl-tools flex bison libelf-dev libssl-dev bc protobuf-compiler libprotobuf-dev squashfs-tools erofs-utils
```
3. Install [rustup](https://rustup.rs) for managing a Rust environment.
Make sure to install the targets that you need for krata:
```sh
$ rustup target add x86_64-unknown-linux-gnu
$ rustup target add x86_64-unknown-linux-musl
```
4. Configure `/etc/default/grub.d/xen.cfg` to give krata guests some room:
```sh
@ -43,7 +54,7 @@ After changing the grub config, update grub: `update-grub`
Then reboot to boot the system as a Xen dom0.
You can validate that Xen is setup by running `xl info` and ensuring it returns useful information about the Xen hypervisor.
You can validate that Xen is setup by running `dmesg | grep "Hypervisor detected"` and ensuring it returns a line like `Hypervisor detected: Xen PV`, if that is missing, the host is not running under Xen.
5. Clone the krata source code:
```sh

2 FAQ.md

@ -2,7 +2,7 @@
## How does krata currently work?
The krata hypervisor makes it possible to launch OCI containers on a Xen hypervisor without utilizing the Xen userspace tooling. krata contains just enough of the userspace of Xen (reimplemented in Rust) to start an x86_64 Xen Linux PV guest, and implements a Linux init process that can boot an OCI container. It does so by converting an OCI image into a squashfs file and packaging basic startup data in a bundle which the init container can read.
The krata hypervisor makes it possible to launch OCI containers on a Xen hypervisor without utilizing the Xen userspace tooling. krata contains just enough of the userspace of Xen (reimplemented in Rust) to start an x86_64 Xen Linux PV guest, and implements a Linux init process that can boot an OCI container. It does so by converting an OCI image into a squashfs/erofs file and packaging basic startup data in a bundle which the init container can read.
In addition, due to the desire to reduce dependence on the dom0 network, krata contains a networking daemon called kratanet. kratanet listens for krata guests to startup and launches a userspace networking environment. krata guests can access the dom0 networking stack via the proxynat layer that makes it possible to communicate over UDP, TCP, and ICMP (echo only) to the outside world. In addition, each krata guest is provided a "gateway" IP (both in IPv4 and IPv6) which utilizes smoltcp to provide a virtual host. That virtual host in the future could dial connections into the container to access container networking resources.


@ -13,14 +13,20 @@ anyhow = { workspace = true }
async-stream = { workspace = true }
clap = { workspace = true }
comfy-table = { workspace = true }
crossterm = { workspace = true }
crossterm = { workspace = true, features = ["event-stream"] }
ctrlc = { workspace = true, features = ["termination"] }
env_logger = { workspace = true }
krata = { path = "../krata", version = "^0.0.7" }
fancy-duration = { workspace = true }
human_bytes = { workspace = true }
indicatif = { workspace = true }
krata = { path = "../krata", version = "^0.0.9" }
log = { workspace = true }
prost-reflect = { workspace = true, features = ["serde"] }
prost-types = { workspace = true }
ratatui = { workspace = true }
serde_json = { workspace = true }
serde_yaml = { workspace = true }
termtree = { workspace = true }
tokio = { workspace = true }
tokio-stream = { workspace = true }
tonic = { workspace = true }


@ -52,34 +52,31 @@ impl DestroyCommand {
async fn wait_guest_destroyed(id: &str, events: EventStream) -> Result<()> {
let mut stream = events.subscribe();
while let Ok(event) = stream.recv().await {
match event {
Event::GuestChanged(changed) => {
let Some(guest) = changed.guest else {
continue;
};
let Event::GuestChanged(changed) = event;
let Some(guest) = changed.guest else {
continue;
};
if guest.id != id {
continue;
}
if guest.id != id {
continue;
}
let Some(state) = guest.state else {
continue;
};
let Some(state) = guest.state else {
continue;
};
if let Some(ref error) = state.error_info {
if state.status() == GuestStatus::Failed {
error!("destroy failed: {}", error.message);
std::process::exit(1);
} else {
error!("guest error: {}", error.message);
}
}
if state.status() == GuestStatus::Destroyed {
std::process::exit(0);
}
if let Some(ref error) = state.error_info {
if state.status() == GuestStatus::Failed {
error!("destroy failed: {}", error.message);
std::process::exit(1);
} else {
error!("guest error: {}", error.message);
}
}
if state.status() == GuestStatus::Destroyed {
std::process::exit(0);
}
}
Ok(())
}


@ -0,0 +1,74 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use krata::{
events::EventStream,
v1::control::{control_service_client::ControlServiceClient, SnoopIdmReply, SnoopIdmRequest},
};
use tokio_stream::StreamExt;
use tonic::transport::Channel;
use crate::format::{kv2line, proto2dynamic, proto2kv};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum IdmSnoopFormat {
Simple,
Jsonl,
KeyValue,
}
#[derive(Parser)]
#[command(about = "Snoop on the IDM bus")]
pub struct IdmSnoopCommand {
#[arg(short, long, default_value = "simple", help = "Output format")]
format: IdmSnoopFormat,
}
impl IdmSnoopCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
let mut stream = client.snoop_idm(SnoopIdmRequest {}).await?.into_inner();
while let Some(reply) = stream.next().await {
let reply = reply?;
match self.format {
IdmSnoopFormat::Simple => {
self.print_simple(reply)?;
}
IdmSnoopFormat::Jsonl => {
let value = serde_json::to_value(proto2dynamic(reply)?)?;
let encoded = serde_json::to_string(&value)?;
println!("{}", encoded.trim());
}
IdmSnoopFormat::KeyValue => {
self.print_key_value(reply)?;
}
}
}
Ok(())
}
fn print_simple(&self, reply: SnoopIdmReply) -> Result<()> {
let from = reply.from;
let to = reply.to;
let Some(packet) = reply.packet else {
return Ok(());
};
let value = serde_json::to_value(proto2dynamic(packet)?)?;
let encoded = serde_json::to_string(&value)?;
println!("({} -> {}) {}", from, to, encoded);
Ok(())
}
fn print_key_value(&self, reply: SnoopIdmReply) -> Result<()> {
let kvs = proto2kv(reply)?;
println!("{}", kv2line(kvs));
Ok(())
}
}


@ -6,12 +6,12 @@ use krata::{
events::EventStream,
v1::{
common::{
guest_image_spec::Image, GuestImageSpec, GuestOciImageSpec, GuestSpec, GuestStatus,
GuestTaskSpec, GuestTaskSpecEnvVar,
guest_image_spec::Image, GuestImageSpec, GuestOciImageFormat, GuestOciImageSpec,
GuestSpec, GuestStatus, GuestTaskSpec, GuestTaskSpecEnvVar,
},
control::{
control_service_client::ControlServiceClient, watch_events_reply::Event,
CreateGuestRequest,
CreateGuestRequest, PullImageRequest,
},
},
};
@ -19,11 +19,15 @@ use log::error;
use tokio::select;
use tonic::{transport::Channel, Request};
use crate::console::StdioConsoleStream;
use crate::{console::StdioConsoleStream, pull::pull_interactive_progress};
use super::pull::PullImageFormat;
#[derive(Parser)]
#[command(about = "Launch a new guest")]
pub struct LauchCommand {
#[arg(short = 'S', long, default_value = "squashfs", help = "Image format")]
image_format: PullImageFormat,
#[arg(short, long, help = "Name of the guest")]
name: Option<String>,
#[arg(
@ -70,11 +74,25 @@ impl LauchCommand {
mut client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
let response = client
.pull_image(PullImageRequest {
image: self.oci.clone(),
format: match self.image_format {
PullImageFormat::Squashfs => GuestOciImageFormat::Squashfs.into(),
PullImageFormat::Erofs => GuestOciImageFormat::Erofs.into(),
},
})
.await?;
let reply = pull_interactive_progress(response.into_inner()).await?;
let request = CreateGuestRequest {
spec: Some(GuestSpec {
name: self.name.unwrap_or_default(),
image: Some(GuestImageSpec {
image: Some(Image::Oci(GuestOciImageSpec { image: self.oci })),
image: Some(Image::Oci(GuestOciImageSpec {
digest: reply.digest,
format: reply.format,
})),
}),
vcpus: self.cpus,
mem: self.mem,


@ -0,0 +1,83 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use krata::{
events::EventStream,
v1::{
common::GuestMetricNode,
control::{control_service_client::ControlServiceClient, ReadGuestMetricsRequest},
},
};
use tonic::transport::Channel;
use crate::format::{kv2line, metrics_flat, metrics_tree, proto2dynamic};
use super::resolve_guest;
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum MetricsFormat {
Tree,
Json,
JsonPretty,
Yaml,
KeyValue,
}
#[derive(Parser)]
#[command(about = "Read metrics from the guest")]
pub struct MetricsCommand {
#[arg(short, long, default_value = "tree", help = "Output format")]
format: MetricsFormat,
#[arg(help = "Guest to read metrics for, either the name or the uuid")]
guest: String,
}
impl MetricsCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
let guest_id: String = resolve_guest(&mut client, &self.guest).await?;
let root = client
.read_guest_metrics(ReadGuestMetricsRequest { guest_id })
.await?
.into_inner()
.root
.unwrap_or_default();
match self.format {
MetricsFormat::Tree => {
self.print_metrics_tree(root)?;
}
MetricsFormat::Json | MetricsFormat::JsonPretty | MetricsFormat::Yaml => {
let value = serde_json::to_value(proto2dynamic(root)?)?;
let encoded = if self.format == MetricsFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == MetricsFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
};
println!("{}", encoded.trim());
}
MetricsFormat::KeyValue => {
self.print_key_value(root)?;
}
}
Ok(())
}
fn print_metrics_tree(&self, root: GuestMetricNode) -> Result<()> {
print!("{}", metrics_tree(root));
Ok(())
}
fn print_key_value(&self, metrics: GuestMetricNode) -> Result<()> {
let kvs = metrics_flat(metrics);
println!("{}", kv2line(kvs));
Ok(())
}
}


@ -1,9 +1,13 @@
pub mod attach;
pub mod destroy;
pub mod idm_snoop;
pub mod launch;
pub mod list;
pub mod logs;
pub mod metrics;
pub mod pull;
pub mod resolve;
pub mod top;
pub mod watch;
use anyhow::{anyhow, Result};
@ -16,8 +20,9 @@ use krata::{
use tonic::{transport::Channel, Request};
use self::{
attach::AttachCommand, destroy::DestroyCommand, launch::LauchCommand, list::ListCommand,
logs::LogsCommand, resolve::ResolveCommand, watch::WatchCommand,
attach::AttachCommand, destroy::DestroyCommand, idm_snoop::IdmSnoopCommand,
launch::LauchCommand, list::ListCommand, logs::LogsCommand, metrics::MetricsCommand,
pull::PullCommand, resolve::ResolveCommand, top::TopCommand, watch::WatchCommand,
};
#[derive(Parser)]
@ -44,9 +49,13 @@ pub enum Commands {
Destroy(DestroyCommand),
List(ListCommand),
Attach(AttachCommand),
Pull(PullCommand),
Logs(LogsCommand),
Watch(WatchCommand),
Resolve(ResolveCommand),
Metrics(MetricsCommand),
IdmSnoop(IdmSnoopCommand),
Top(TopCommand),
}
impl ControlCommand {
@ -82,6 +91,22 @@ impl ControlCommand {
Commands::Resolve(resolve) => {
resolve.run(client).await?;
}
Commands::Metrics(metrics) => {
metrics.run(client, events).await?;
}
Commands::IdmSnoop(snoop) => {
snoop.run(client, events).await?;
}
Commands::Top(top) => {
top.run(client, events).await?;
}
Commands::Pull(pull) => {
pull.run(client).await?;
}
}
Ok(())
}


@ -0,0 +1,42 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use krata::v1::{
common::GuestOciImageFormat,
control::{control_service_client::ControlServiceClient, PullImageRequest},
};
use tonic::transport::Channel;
use crate::pull::pull_interactive_progress;
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
pub enum PullImageFormat {
Squashfs,
Erofs,
}
#[derive(Parser)]
#[command(about = "Pull an image into the cache")]
pub struct PullCommand {
#[arg(help = "Image name")]
image: String,
#[arg(short = 's', long, default_value = "squashfs", help = "Image format")]
image_format: PullImageFormat,
}
impl PullCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.pull_image(PullImageRequest {
image: self.image.clone(),
format: match self.image_format {
PullImageFormat::Squashfs => GuestOciImageFormat::Squashfs.into(),
PullImageFormat::Erofs => GuestOciImageFormat::Erofs.into(),
},
})
.await?;
let reply = pull_interactive_progress(response.into_inner()).await?;
println!("{}", reply.digest);
Ok(())
}
}

215 crates/ctl/src/cli/top.rs (new file)

@ -0,0 +1,215 @@
use anyhow::Result;
use clap::Parser;
use krata::{events::EventStream, v1::control::control_service_client::ControlServiceClient};
use std::{
io::{self, stdout, Stdout},
time::Duration,
};
use tokio::select;
use tokio_stream::StreamExt;
use tonic::transport::Channel;
use crossterm::{
event::{Event, KeyCode, KeyEvent, KeyEventKind},
execute,
terminal::*,
};
use ratatui::{
prelude::*,
symbols::border,
widgets::{
block::{Position, Title},
Block, Borders, Row, Table, TableState,
},
};
use crate::{
format::guest_status_text,
metrics::{
lookup_metric_value, MultiMetricCollector, MultiMetricCollectorHandle, MultiMetricState,
},
};
#[derive(Parser)]
#[command(about = "Dashboard for running guests")]
pub struct TopCommand {}
pub type Tui = Terminal<CrosstermBackend<Stdout>>;
impl TopCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
let collector = MultiMetricCollector::new(client, events, Duration::from_millis(200))?;
let collector = collector.launch().await?;
let mut tui = TopCommand::init()?;
let mut app = TopApp {
metrics: MultiMetricState { guests: vec![] },
exit: false,
table: TableState::new(),
};
app.run(collector, &mut tui).await?;
TopCommand::restore()?;
Ok(())
}
pub fn init() -> io::Result<Tui> {
execute!(stdout(), EnterAlternateScreen)?;
enable_raw_mode()?;
Terminal::new(CrosstermBackend::new(stdout()))
}
pub fn restore() -> io::Result<()> {
execute!(stdout(), LeaveAlternateScreen)?;
disable_raw_mode()?;
Ok(())
}
}
pub struct TopApp {
table: TableState,
metrics: MultiMetricState,
exit: bool,
}
impl TopApp {
pub async fn run(
&mut self,
mut collector: MultiMetricCollectorHandle,
terminal: &mut Tui,
) -> Result<()> {
let mut events = crossterm::event::EventStream::new();
while !self.exit {
terminal.draw(|frame| self.render_frame(frame))?;
select! {
x = collector.receiver.recv() => match x {
Some(state) => {
self.metrics = state;
},
None => {
break;
}
},
x = events.next() => match x {
Some(event) => {
let event = event?;
self.handle_event(event)?;
},
None => {
break;
}
}
};
}
Ok(())
}
fn render_frame(&mut self, frame: &mut Frame) {
frame.render_widget(self, frame.size());
}
fn handle_event(&mut self, event: Event) -> io::Result<()> {
match event {
Event::Key(key_event) if key_event.kind == KeyEventKind::Press => {
self.handle_key_event(key_event)
}
_ => {}
};
Ok(())
}
fn exit(&mut self) {
self.exit = true;
}
fn handle_key_event(&mut self, key_event: KeyEvent) {
if let KeyCode::Char('q') = key_event.code {
self.exit()
}
}
}
impl Widget for &mut TopApp {
fn render(self, area: Rect, buf: &mut Buffer) {
let title = Title::from(" krata hypervisor ".bold());
let instructions = Title::from(vec![" Quit ".into(), "<Q> ".blue().bold()]);
let block = Block::default()
.title(title.alignment(Alignment::Center))
.title(
instructions
.alignment(Alignment::Center)
.position(Position::Bottom),
)
.borders(Borders::ALL)
.border_set(border::THICK);
let mut rows = vec![];
for ms in &self.metrics.guests {
let Some(ref spec) = ms.guest.spec else {
continue;
};
let Some(ref state) = ms.guest.state else {
continue;
};
let memory_total = ms
.root
.as_ref()
.and_then(|root| lookup_metric_value(root, "system/memory/total"));
let memory_used = ms
.root
.as_ref()
.and_then(|root| lookup_metric_value(root, "system/memory/used"));
let memory_free = ms
.root
.as_ref()
.and_then(|root| lookup_metric_value(root, "system/memory/free"));
let row = Row::new(vec![
spec.name.clone(),
ms.guest.id.clone(),
guest_status_text(state.status()),
memory_total.unwrap_or_default(),
memory_used.unwrap_or_default(),
memory_free.unwrap_or_default(),
]);
rows.push(row);
}
let widths = [
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
Constraint::Min(8),
];
let table = Table::new(rows, widths)
.header(
Row::new(vec![
"name",
"id",
"status",
"total memory",
"used memory",
"free memory",
])
.style(Style::new().bold())
.bottom_margin(1),
)
.column_spacing(1)
.block(block);
StatefulWidget::render(table, area, buf, &mut self.table);
}
}


@ -28,12 +28,10 @@ impl WatchCommand {
let mut stream = events.subscribe();
loop {
let event = stream.recv().await?;
match event {
Event::GuestChanged(changed) => {
let guest = changed.guest.clone();
self.print_event("guest.changed", changed, guest)?;
}
}
let Event::GuestChanged(changed) = event;
let guest = changed.guest.clone();
self.print_event("guest.changed", changed, guest)?;
}
}


@ -69,29 +69,26 @@ impl StdioConsoleStream {
Ok(tokio::task::spawn(async move {
let mut stream = events.subscribe();
while let Ok(event) = stream.recv().await {
match event {
Event::GuestChanged(changed) => {
let Some(guest) = changed.guest else {
continue;
};
let Event::GuestChanged(changed) = event;
let Some(guest) = changed.guest else {
continue;
};
let Some(state) = guest.state else {
continue;
};
let Some(state) = guest.state else {
continue;
};
if guest.id != id {
continue;
}
if guest.id != id {
continue;
}
if let Some(exit_info) = state.exit_info {
return Some(exit_info.code);
}
if let Some(exit_info) = state.exit_info {
return Some(exit_info.code);
}
let status = state.status();
if status == GuestStatus::Destroying || status == GuestStatus::Destroyed {
return Some(10);
}
}
let status = state.status();
if status == GuestStatus::Destroying || status == GuestStatus::Destroyed {
return Some(10);
}
}
None


@ -1,8 +1,12 @@
use std::collections::HashMap;
use std::{collections::HashMap, time::Duration};
use anyhow::Result;
use krata::v1::common::{Guest, GuestStatus};
use prost_reflect::{DynamicMessage, ReflectMessage, Value};
use fancy_duration::FancyDuration;
use human_bytes::human_bytes;
use krata::v1::common::{Guest, GuestMetricFormat, GuestMetricNode, GuestStatus};
use prost_reflect::{DynamicMessage, FieldDescriptor, ReflectMessage, Value as ReflectValue};
use prost_types::Value;
use termtree::Tree;
pub fn proto2dynamic(proto: impl ReflectMessage) -> Result<DynamicMessage> {
Ok(DynamicMessage::decode(
@ -15,38 +19,56 @@ pub fn proto2kv(proto: impl ReflectMessage) -> Result<HashMap<String, String>> {
let message = proto2dynamic(proto)?;
let mut map = HashMap::new();
fn crawl(prefix: &str, map: &mut HashMap<String, String>, message: &DynamicMessage) {
for (field, value) in message.fields() {
let path = if prefix.is_empty() {
field.name().to_string()
} else {
format!("{}.{}", prefix, field.name())
};
match value {
Value::Message(child) => {
crawl(&path, map, child);
fn crawl(
prefix: String,
field: Option<&FieldDescriptor>,
map: &mut HashMap<String, String>,
value: &ReflectValue,
) {
match value {
ReflectValue::Message(child) => {
for (field, field_value) in child.fields() {
let path = if prefix.is_empty() {
field.json_name().to_string()
} else {
format!("{}.{}", prefix, field.json_name())
};
crawl(path, Some(&field), map, field_value);
}
}
Value::EnumNumber(number) => {
if let Some(e) = field.kind().as_enum() {
ReflectValue::EnumNumber(number) => {
if let Some(kind) = field.map(|x| x.kind()) {
if let Some(e) = kind.as_enum() {
if let Some(value) = e.get_value(*number) {
map.insert(path, value.name().to_string());
map.insert(prefix, value.name().to_string());
}
}
}
}
Value::String(value) => {
map.insert(path, value.clone());
}
ReflectValue::String(value) => {
map.insert(prefix.to_string(), value.clone());
}
_ => {
map.insert(path, value.to_string());
ReflectValue::List(value) => {
for (x, value) in value.iter().enumerate() {
crawl(format!("{}.{}", prefix, x), field, map, value);
}
}
_ => {
map.insert(prefix.to_string(), value.to_string());
}
}
}
crawl("", &mut map, &message);
crawl(
"".to_string(),
None,
&mut map,
&ReflectValue::Message(message),
);
Ok(map)
}
@ -85,3 +107,63 @@ pub fn guest_simple_line(guest: &Guest) -> String {
let ipv6 = network.map(|x| x.guest_ipv6.as_str()).unwrap_or("");
format!("{}\t{}\t{}\t{}\t{}", guest.id, state, name, ipv4, ipv6)
}
fn metrics_value_string(value: Value) -> String {
proto2dynamic(value)
.map(|x| serde_json::to_string(&x).ok())
.ok()
.flatten()
.unwrap_or_default()
}
fn metrics_value_numeric(value: Value) -> f64 {
let string = metrics_value_string(value);
string.parse::<f64>().ok().unwrap_or(f64::NAN)
}
pub fn metrics_value_pretty(value: Value, format: GuestMetricFormat) -> String {
match format {
GuestMetricFormat::Bytes => human_bytes(metrics_value_numeric(value)),
GuestMetricFormat::Integer => (metrics_value_numeric(value) as u64).to_string(),
GuestMetricFormat::DurationSeconds => {
FancyDuration(Duration::from_secs_f64(metrics_value_numeric(value))).to_string()
}
_ => metrics_value_string(value),
}
}
fn metrics_flat_internal(prefix: &str, node: GuestMetricNode, map: &mut HashMap<String, String>) {
if let Some(value) = node.value {
map.insert(prefix.to_string(), metrics_value_string(value));
}
for child in node.children {
let path = if prefix.is_empty() {
child.name.to_string()
} else {
format!("{}.{}", prefix, child.name)
};
metrics_flat_internal(&path, child, map);
}
}
pub fn metrics_flat(root: GuestMetricNode) -> HashMap<String, String> {
let mut map = HashMap::new();
metrics_flat_internal("", root, &mut map);
map
}
pub fn metrics_tree(node: GuestMetricNode) -> Tree<String> {
let mut name = node.name.to_string();
let format = node.format();
if let Some(value) = node.value {
let value_string = metrics_value_pretty(value, format);
name.push_str(&format!(": {}", value_string));
}
let mut tree = Tree::new(name);
for child in node.children {
tree.push(metrics_tree(child));
}
tree
}


@ -1,3 +1,5 @@
pub mod cli;
pub mod console;
pub mod format;
pub mod metrics;
pub mod pull;

158 crates/ctl/src/metrics.rs (new file)

@ -0,0 +1,158 @@
use anyhow::Result;
use krata::{
events::EventStream,
v1::{
common::{Guest, GuestMetricNode, GuestStatus},
control::{
control_service_client::ControlServiceClient, watch_events_reply::Event,
ListGuestsRequest, ReadGuestMetricsRequest,
},
},
};
use log::error;
use std::time::Duration;
use tokio::{
select,
sync::mpsc::{channel, Receiver, Sender},
task::JoinHandle,
time::{sleep, timeout},
};
use tonic::transport::Channel;
use crate::format::metrics_value_pretty;
pub struct MetricState {
pub guest: Guest,
pub root: Option<GuestMetricNode>,
}
pub struct MultiMetricState {
pub guests: Vec<MetricState>,
}
pub struct MultiMetricCollector {
client: ControlServiceClient<Channel>,
events: EventStream,
period: Duration,
}
pub struct MultiMetricCollectorHandle {
pub receiver: Receiver<MultiMetricState>,
task: JoinHandle<()>,
}
impl Drop for MultiMetricCollectorHandle {
fn drop(&mut self) {
self.task.abort();
}
}
impl MultiMetricCollector {
pub fn new(
client: ControlServiceClient<Channel>,
events: EventStream,
period: Duration,
) -> Result<MultiMetricCollector> {
Ok(MultiMetricCollector {
client,
events,
period,
})
}
pub async fn launch(mut self) -> Result<MultiMetricCollectorHandle> {
let (sender, receiver) = channel::<MultiMetricState>(100);
let task = tokio::task::spawn(async move {
if let Err(error) = self.process(sender).await {
error!("failed to process multi metric collector: {}", error);
}
});
Ok(MultiMetricCollectorHandle { receiver, task })
}
pub async fn process(&mut self, sender: Sender<MultiMetricState>) -> Result<()> {
let mut events = self.events.subscribe();
let mut guests: Vec<Guest> = self
.client
.list_guests(ListGuestsRequest {})
.await?
.into_inner()
.guests;
loop {
let collect = select! {
x = events.recv() => match x {
Ok(event) => {
let Event::GuestChanged(changed) = event;
let Some(guest) = changed.guest else {
continue;
};
let Some(ref state) = guest.state else {
continue;
};
guests.retain(|x| x.id != guest.id);
if state.status() != GuestStatus::Destroying {
guests.push(guest);
}
false
},
Err(error) => {
return Err(error.into());
}
},
_ = sleep(self.period) => {
true
}
};
if !collect {
continue;
}
let mut metrics = Vec::new();
for guest in &guests {
let Some(ref state) = guest.state else {
continue;
};
if state.status() != GuestStatus::Started {
continue;
}
let root = timeout(
Duration::from_secs(5),
self.client.read_guest_metrics(ReadGuestMetricsRequest {
guest_id: guest.id.clone(),
}),
)
.await
.ok()
.and_then(|x| x.ok())
.map(|x| x.into_inner())
.and_then(|x| x.root);
metrics.push(MetricState {
guest: guest.clone(),
root,
});
}
sender.send(MultiMetricState { guests: metrics }).await?;
}
}
}
pub fn lookup<'a>(node: &'a GuestMetricNode, path: &str) -> Option<&'a GuestMetricNode> {
let Some((what, b)) = path.split_once('/') else {
return node.children.iter().find(|x| x.name == path);
};
let next = node.children.iter().find(|x| x.name == what)?;
return lookup(next, b);
}
pub fn lookup_metric_value(node: &GuestMetricNode, path: &str) -> Option<String> {
lookup(node, path).and_then(|x| {
x.value
.as_ref()
.map(|v| metrics_value_pretty(v.clone(), x.format()))
})
}

118 crates/ctl/src/pull.rs (new file)

@ -0,0 +1,118 @@
use std::collections::HashMap;
use anyhow::{anyhow, Result};
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use krata::v1::control::{PullImageProgressLayerPhase, PullImageProgressPhase, PullImageReply};
use tokio_stream::StreamExt;
use tonic::Streaming;
pub async fn pull_interactive_progress(
mut stream: Streaming<PullImageReply>,
) -> Result<PullImageReply> {
let mut multi_progress: Option<(MultiProgress, HashMap<String, ProgressBar>)> = None;
while let Some(reply) = stream.next().await {
let reply = reply?;
if reply.progress.is_none() && !reply.digest.is_empty() {
return Ok(reply);
}
let Some(oci) = reply.progress else {
continue;
};
if multi_progress.is_none() {
multi_progress = Some((MultiProgress::new(), HashMap::new()));
}
let Some((multi_progress, progresses)) = multi_progress.as_mut() else {
continue;
};
match oci.phase() {
PullImageProgressPhase::Resolved
| PullImageProgressPhase::ConfigAcquire
| PullImageProgressPhase::LayerAcquire => {
if progresses.is_empty() && !oci.layers.is_empty() {
for layer in &oci.layers {
let bar = ProgressBar::new(layer.total);
bar.set_style(ProgressStyle::with_template("{msg} {bar}").unwrap());
progresses.insert(layer.id.clone(), bar.clone());
multi_progress.add(bar);
}
}
for layer in oci.layers {
let Some(progress) = progresses.get_mut(&layer.id) else {
continue;
};
let phase = match layer.phase() {
PullImageProgressLayerPhase::Waiting => "waiting",
PullImageProgressLayerPhase::Downloading => "downloading",
PullImageProgressLayerPhase::Downloaded => "downloaded",
PullImageProgressLayerPhase::Extracting => "extracting",
PullImageProgressLayerPhase::Extracted => "extracted",
_ => "unknown",
};
let simple = if let Some((_, hash)) = layer.id.split_once(':') {
hash
} else {
"unknown"
};
let simple = if simple.len() > 10 {
&simple[0..10]
} else {
simple
};
let message = format!(
"{:width$} {:phwidth$}",
simple,
phase,
width = 10,
phwidth = 11
);
if message != progress.message() {
progress.set_message(message);
}
progress.update(|state| {
state.set_len(layer.total);
state.set_pos(layer.value);
});
}
}
PullImageProgressPhase::Packing => {
for (key, bar) in &mut *progresses {
if key == "packing" {
continue;
}
bar.finish_and_clear();
multi_progress.remove(bar);
}
progresses.retain(|k, _| k == "packing");
if progresses.is_empty() {
let progress = ProgressBar::new(100);
progress.set_message("packing ");
progress.set_style(ProgressStyle::with_template("{msg} {bar}").unwrap());
progresses.insert("packing".to_string(), progress);
}
let Some(progress) = progresses.get("packing") else {
continue;
};
progress.update(|state| {
state.set_len(oci.total);
state.set_pos(oci.value);
});
}
_ => {}
}
}
Err(anyhow!("never received final reply for image pull"))
}


@ -17,8 +17,9 @@ circular-buffer = { workspace = true }
clap = { workspace = true }
env_logger = { workspace = true }
futures = { workspace = true }
krata = { path = "../krata", version = "^0.0.7" }
krata-runtime = { path = "../runtime", version = "^0.0.7" }
krata = { path = "../krata", version = "^0.0.9" }
krata-oci = { path = "../oci", version = "^0.0.9" }
krata-runtime = { path = "../runtime", version = "^0.0.9" }
log = { workspace = true }
prost = { workspace = true }
redb = { workspace = true }


@ -3,7 +3,6 @@ use clap::Parser;
use env_logger::Env;
use krata::dial::ControlDialAddress;
use kratad::Daemon;
use kratart::Runtime;
use log::LevelFilter;
use std::{
str::FromStr,
@ -27,8 +26,8 @@ async fn main() -> Result<()> {
let args = DaemonCommand::parse();
let addr = ControlDialAddress::from_str(&args.listen)?;
let runtime = Runtime::new(args.store.clone()).await?;
let mut daemon = Daemon::new(args.store.clone(), runtime).await?;
let mut daemon = Daemon::new(args.store.clone()).await?;
daemon.listen(addr).await?;
Ok(())
}


@ -79,7 +79,7 @@ impl Drop for DaemonConsoleHandle {
pub struct DaemonConsole {
listeners: ListenerMap,
buffers: BufferMap,
receiver: Receiver<(u32, Vec<u8>)>,
receiver: Receiver<(u32, Option<Vec<u8>>)>,
sender: Sender<(u32, Vec<u8>)>,
task: JoinHandle<()>,
}
@ -124,16 +124,22 @@ impl DaemonConsole {
};
let mut buffers = self.buffers.lock().await;
let buffer = buffers
.entry(domid)
.or_insert_with_key(|_| RawConsoleBuffer::boxed());
buffer.extend_from_slice(&data);
drop(buffers);
let mut listeners = self.listeners.lock().await;
if let Some(senders) = listeners.get_mut(&domid) {
senders.retain(|sender| {
!matches!(sender.try_send(data.to_vec()), Err(TrySendError::Closed(_)))
});
if let Some(data) = data {
let buffer = buffers
.entry(domid)
.or_insert_with_key(|_| RawConsoleBuffer::boxed());
buffer.extend_from_slice(&data);
drop(buffers);
let mut listeners = self.listeners.lock().await;
if let Some(senders) = listeners.get_mut(&domid) {
senders.retain(|sender| {
!matches!(sender.try_send(data.to_vec()), Err(TrySendError::Closed(_)))
});
}
} else {
buffers.remove(&domid);
let mut listeners = self.listeners.lock().await;
listeners.remove(&domid);
}
}
Ok(())


@ -1,25 +1,40 @@
use std::{pin::Pin, str::FromStr};
use async_stream::try_stream;
use futures::Stream;
use krata::v1::{
common::{Guest, GuestState, GuestStatus},
control::{
control_service_server::ControlService, ConsoleDataReply, ConsoleDataRequest,
CreateGuestReply, CreateGuestRequest, DestroyGuestReply, DestroyGuestRequest,
ListGuestsReply, ListGuestsRequest, ResolveGuestReply, ResolveGuestRequest,
WatchEventsReply, WatchEventsRequest,
use krata::{
idm::protocol::{
idm_request::Request as IdmRequestType, idm_response::Response as IdmResponseType,
IdmMetricsRequest,
},
v1::{
common::{Guest, GuestOciImageFormat, GuestState, GuestStatus},
control::{
control_service_server::ControlService, ConsoleDataReply, ConsoleDataRequest,
CreateGuestReply, CreateGuestRequest, DestroyGuestReply, DestroyGuestRequest,
ListGuestsReply, ListGuestsRequest, PullImageReply, PullImageRequest,
ReadGuestMetricsReply, ReadGuestMetricsRequest, ResolveGuestReply, ResolveGuestRequest,
SnoopIdmReply, SnoopIdmRequest, WatchEventsReply, WatchEventsRequest,
},
},
};
use krataoci::{
name::ImageName,
packer::{service::OciPackerService, OciImagePacked, OciPackedFormat},
progress::{OciProgress, OciProgressContext},
};
use std::{pin::Pin, str::FromStr};
use tokio::{
select,
sync::mpsc::{channel, Sender},
task::JoinError,
};
use tokio_stream::StreamExt;
use tonic::{Request, Response, Status, Streaming};
use uuid::Uuid;
use crate::{console::DaemonConsoleHandle, db::GuestStore, event::DaemonEventContext};
use crate::{
console::DaemonConsoleHandle, db::GuestStore, event::DaemonEventContext, idm::DaemonIdmHandle,
metrics::idm_metric_to_api, oci::convert_oci_progress,
};
pub struct ApiError {
message: String,
@ -40,25 +55,31 @@ impl From<ApiError> for Status {
}
#[derive(Clone)]
pub struct RuntimeControlService {
pub struct DaemonControlService {
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
guests: GuestStore,
guest_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
}
impl RuntimeControlService {
impl DaemonControlService {
pub fn new(
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
guests: GuestStore,
guest_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
) -> Self {
Self {
events,
console,
idm,
guests,
guest_reconciler_notify,
packer,
}
}
}
@ -68,14 +89,25 @@ enum ConsoleDataSelect {
Write(Option<Result<ConsoleDataRequest, tonic::Status>>),
}
enum PullImageSelect {
Progress(usize),
Completed(Result<Result<OciImagePacked, anyhow::Error>, JoinError>),
}
#[tonic::async_trait]
impl ControlService for RuntimeControlService {
impl ControlService for DaemonControlService {
type ConsoleDataStream =
Pin<Box<dyn Stream<Item = Result<ConsoleDataReply, Status>> + Send + 'static>>;
type PullImageStream =
Pin<Box<dyn Stream<Item = Result<PullImageReply, Status>> + Send + 'static>>;
type WatchEventsStream =
Pin<Box<dyn Stream<Item = Result<WatchEventsReply, Status>> + Send + 'static>>;
type SnoopIdmStream =
Pin<Box<dyn Stream<Item = Result<SnoopIdmReply, Status>> + Send + 'static>>;
async fn create_guest(
&self,
request: Request<CreateGuestRequest>,
@ -269,6 +301,123 @@ impl ControlService for RuntimeControlService {
Ok(Response::new(Box::pin(output) as Self::ConsoleDataStream))
}
async fn read_guest_metrics(
&self,
request: Request<ReadGuestMetricsRequest>,
) -> Result<Response<ReadGuestMetricsReply>, Status> {
let request = request.into_inner();
let uuid = Uuid::from_str(&request.guest_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let guest = self
.guests
.read(uuid)
.await
.map_err(|error| ApiError {
message: error.to_string(),
})?
.ok_or_else(|| ApiError {
message: "guest did not exist in the database".to_string(),
})?;
let Some(ref state) = guest.state else {
return Err(ApiError {
message: "guest did not have state".to_string(),
}
.into());
};
let domid = state.domid;
if domid == 0 {
return Err(ApiError {
message: "invalid domid on the guest".to_string(),
}
.into());
}
let client = self.idm.client(domid).await.map_err(|error| ApiError {
message: error.to_string(),
})?;
let response = client
.send(IdmRequestType::Metrics(IdmMetricsRequest {}))
.await
.map_err(|error| ApiError {
message: error.to_string(),
})?;
let mut reply = ReadGuestMetricsReply::default();
if let IdmResponseType::Metrics(metrics) = response {
reply.root = metrics.root.map(idm_metric_to_api);
}
Ok(Response::new(reply))
}
async fn pull_image(
&self,
request: Request<PullImageRequest>,
) -> Result<Response<Self::PullImageStream>, Status> {
let request = request.into_inner();
let name = ImageName::parse(&request.image).map_err(|err| ApiError {
message: err.to_string(),
})?;
let format = match request.format() {
GuestOciImageFormat::Unknown => OciPackedFormat::Squashfs,
GuestOciImageFormat::Squashfs => OciPackedFormat::Squashfs,
GuestOciImageFormat::Erofs => OciPackedFormat::Erofs,
};
let (sender, mut receiver) = channel::<OciProgress>(100);
let context = OciProgressContext::new(sender);
let our_packer = self.packer.clone();
let output = try_stream! {
let mut task = tokio::task::spawn(async move {
our_packer.request(name, format, context).await
});
loop {
let mut progresses = Vec::new();
let what = select! {
x = receiver.recv_many(&mut progresses, 10) => PullImageSelect::Progress(x),
x = &mut task => PullImageSelect::Completed(x),
};
match what {
PullImageSelect::Progress(count) => {
if count > 0 {
let progress = progresses.remove(progresses.len() - 1);
let reply = PullImageReply {
progress: Some(convert_oci_progress(progress)),
digest: String::new(),
format: GuestOciImageFormat::Unknown.into(),
};
yield reply;
}
},
PullImageSelect::Completed(result) => {
let result = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let packed = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let reply = PullImageReply {
progress: None,
digest: packed.digest,
format: match packed.format {
OciPackedFormat::Squashfs => GuestOciImageFormat::Squashfs.into(),
OciPackedFormat::Erofs => GuestOciImageFormat::Erofs.into(),
},
};
yield reply;
break;
},
}
}
};
Ok(Response::new(Box::pin(output) as Self::PullImageStream))
}
async fn watch_events(
&self,
request: Request<WatchEventsRequest>,
@ -282,4 +431,18 @@ impl ControlService for RuntimeControlService {
};
Ok(Response::new(Box::pin(output) as Self::WatchEventsStream))
}
async fn snoop_idm(
&self,
request: Request<SnoopIdmRequest>,
) -> Result<Response<Self::SnoopIdmStream>, Status> {
let _ = request.into_inner();
let mut messages = self.idm.snoop();
let output = try_stream! {
while let Ok(event) = messages.recv().await {
yield SnoopIdmReply { from: event.from, to: event.to, packet: Some(event.packet) };
}
};
Ok(Response::new(Box::pin(output) as Self::SnoopIdmStream))
}
}
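A minimal client-side sketch of consuming the streaming PullImage rpc handled above. This is not part of the changeset; it assumes the standard tonic-generated `control_service_client` module for `ControlService` and an `anyhow` error type in the caller's crate.

```rust
// Sketch only: drive the streaming PullImage rpc from a client, assuming the
// tonic-generated control_service_client module exists for ControlService.
use krata::v1::common::GuestOciImageFormat;
use krata::v1::control::{control_service_client::ControlServiceClient, PullImageRequest};

async fn pull_image_example(addr: String, image: String) -> anyhow::Result<()> {
    let mut client = ControlServiceClient::connect(addr).await?;
    let mut stream = client
        .pull_image(PullImageRequest {
            image,
            format: GuestOciImageFormat::Squashfs.into(),
        })
        .await?
        .into_inner();
    // Progress replies stream first; the final reply carries the packed digest.
    while let Some(reply) = stream.message().await? {
        match reply.progress {
            Some(progress) => println!("pull: {}/{}", progress.value, progress.total),
            None => println!("pulled digest {}", reply.digest),
        }
    }
    Ok(())
}
```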

View File

@ -6,10 +6,10 @@ use std::{
use anyhow::Result;
use krata::{
idm::protocol::{idm_event::Event, IdmPacket},
idm::protocol::{idm_event::Event, IdmEvent},
v1::common::{GuestExitInfo, GuestState, GuestStatus},
};
use log::error;
use log::{error, warn};
use tokio::{
select,
sync::{
@ -21,15 +21,12 @@ use tokio::{
};
use uuid::Uuid;
use crate::{
db::GuestStore,
idm::{DaemonIdmHandle, DaemonIdmSubscribeHandle},
};
use crate::{db::GuestStore, idm::DaemonIdmHandle};
pub type DaemonEvent = krata::v1::control::watch_events_reply::Event;
const EVENT_CHANNEL_QUEUE_LEN: usize = 1000;
const IDM_CHANNEL_QUEUE_LEN: usize = 1000;
const IDM_EVENT_CHANNEL_QUEUE_LEN: usize = 1000;
#[derive(Clone)]
pub struct DaemonEventContext {
@ -52,9 +49,9 @@ pub struct DaemonEventGenerator {
guest_reconciler_notify: Sender<Uuid>,
feed: broadcast::Receiver<DaemonEvent>,
idm: DaemonIdmHandle,
idms: HashMap<u32, (Uuid, DaemonIdmSubscribeHandle)>,
idm_sender: Sender<(u32, IdmPacket)>,
idm_receiver: Receiver<(u32, IdmPacket)>,
idms: HashMap<u32, (Uuid, JoinHandle<()>)>,
idm_sender: Sender<(u32, IdmEvent)>,
idm_receiver: Receiver<(u32, IdmEvent)>,
_event_sender: broadcast::Sender<DaemonEvent>,
}
@ -65,7 +62,7 @@ impl DaemonEventGenerator {
idm: DaemonIdmHandle,
) -> Result<(DaemonEventContext, DaemonEventGenerator)> {
let (sender, _) = broadcast::channel(EVENT_CHANNEL_QUEUE_LEN);
let (idm_sender, idm_receiver) = channel(IDM_CHANNEL_QUEUE_LEN);
let (idm_sender, idm_receiver) = channel(IDM_EVENT_CHANNEL_QUEUE_LEN);
let generator = DaemonEventGenerator {
guests,
guest_reconciler_notify,
@ -81,46 +78,55 @@ impl DaemonEventGenerator {
}
async fn handle_feed_event(&mut self, event: &DaemonEvent) -> Result<()> {
match event {
DaemonEvent::GuestChanged(changed) => {
let Some(ref guest) = changed.guest else {
return Ok(());
};
let DaemonEvent::GuestChanged(changed) = event;
let Some(ref guest) = changed.guest else {
return Ok(());
};
let Some(ref state) = guest.state else {
return Ok(());
};
let Some(ref state) = guest.state else {
return Ok(());
};
let status = state.status();
let id = Uuid::from_str(&guest.id)?;
let domid = state.domid;
match status {
GuestStatus::Started => {
if let Entry::Vacant(e) = self.idms.entry(domid) {
let subscribe =
self.idm.subscribe(domid, self.idm_sender.clone()).await?;
e.insert((id, subscribe));
let status = state.status();
let id = Uuid::from_str(&guest.id)?;
let domid = state.domid;
match status {
GuestStatus::Started => {
if let Entry::Vacant(e) = self.idms.entry(domid) {
let client = self.idm.client(domid).await?;
let mut receiver = client.subscribe().await?;
let sender = self.idm_sender.clone();
let task = tokio::task::spawn(async move {
loop {
let Ok(event) = receiver.recv().await else {
break;
};
if let Err(error) = sender.send((domid, event)).await {
warn!("unable to deliver idm event: {}", error);
}
}
}
GuestStatus::Destroyed => {
if let Some((_, handle)) = self.idms.remove(&domid) {
handle.unsubscribe().await?;
}
}
_ => {}
});
e.insert((id, task));
}
}
GuestStatus::Destroyed => {
if let Some((_, handle)) = self.idms.remove(&domid) {
handle.abort();
}
}
_ => {}
}
Ok(())
}
async fn handle_idm_packet(&mut self, id: Uuid, packet: IdmPacket) -> Result<()> {
if let Some(Event::Exit(exit)) = packet.event.and_then(|x| x.event) {
self.handle_exit_code(id, exit.code).await?;
async fn handle_idm_event(&mut self, id: Uuid, event: IdmEvent) -> Result<()> {
match event.event {
Some(Event::Exit(exit)) => self.handle_exit_code(id, exit.code).await,
None => Ok(()),
}
Ok(())
}
async fn handle_exit_code(&mut self, id: Uuid, code: i32) -> Result<()> {
@ -142,9 +148,9 @@ impl DaemonEventGenerator {
async fn evaluate(&mut self) -> Result<()> {
select! {
x = self.idm_receiver.recv() => match x {
Some((domid, packet)) => {
Some((domid, event)) => {
if let Some((id, _)) = self.idms.get(&domid) {
self.handle_idm_packet(*id, packet).await?;
self.handle_idm_event(*id, event).await?;
}
Ok(())
},
@ -159,7 +165,7 @@ impl DaemonEventGenerator {
Err(error) => {
Err(error.into())
}
}
},
}
}

View File

@ -1,53 +1,46 @@
use std::{collections::HashMap, sync::Arc};
use std::{
collections::{hash_map::Entry, HashMap},
sync::Arc,
};
use anyhow::Result;
use anyhow::{anyhow, Result};
use bytes::{Buf, BytesMut};
use krata::idm::protocol::IdmPacket;
use krata::idm::{
client::{IdmBackend, IdmClient},
protocol::IdmPacket,
};
use kratart::channel::ChannelService;
use log::{error, warn};
use prost::Message;
use tokio::{
select,
sync::{
mpsc::{Receiver, Sender},
broadcast,
mpsc::{channel, Receiver, Sender},
Mutex,
},
task::JoinHandle,
};
type ListenerMap = Arc<Mutex<HashMap<u32, Sender<(u32, IdmPacket)>>>>;
type BackendFeedMap = Arc<Mutex<HashMap<u32, Sender<IdmPacket>>>>;
type ClientMap = Arc<Mutex<HashMap<u32, IdmClient>>>;
#[derive(Clone)]
pub struct DaemonIdmHandle {
listeners: ListenerMap,
clients: ClientMap,
feeds: BackendFeedMap,
tx_sender: Sender<(u32, IdmPacket)>,
task: Arc<JoinHandle<()>>,
}
#[derive(Clone)]
pub struct DaemonIdmSubscribeHandle {
domid: u32,
listeners: ListenerMap,
}
impl DaemonIdmSubscribeHandle {
pub async fn unsubscribe(&self) -> Result<()> {
let mut guard = self.listeners.lock().await;
let _ = guard.remove(&self.domid);
Ok(())
}
snoop_sender: broadcast::Sender<DaemonIdmSnoopPacket>,
}
impl DaemonIdmHandle {
pub async fn subscribe(
&self,
domid: u32,
sender: Sender<(u32, IdmPacket)>,
) -> Result<DaemonIdmSubscribeHandle> {
let mut guard = self.listeners.lock().await;
guard.insert(domid, sender);
Ok(DaemonIdmSubscribeHandle {
domid,
listeners: self.listeners.clone(),
})
pub fn snoop(&self) -> broadcast::Receiver<DaemonIdmSnoopPacket> {
self.snoop_sender.subscribe()
}
pub async fn client(&self, domid: u32) -> Result<IdmClient> {
client_or_create(domid, &self.tx_sender, &self.clients, &self.feeds).await
}
}
@ -59,70 +52,137 @@ impl Drop for DaemonIdmHandle {
}
}
#[derive(Clone)]
pub struct DaemonIdmSnoopPacket {
pub from: u32,
pub to: u32,
pub packet: IdmPacket,
}
pub struct DaemonIdm {
listeners: ListenerMap,
receiver: Receiver<(u32, Vec<u8>)>,
clients: ClientMap,
feeds: BackendFeedMap,
tx_sender: Sender<(u32, IdmPacket)>,
tx_raw_sender: Sender<(u32, Vec<u8>)>,
tx_receiver: Receiver<(u32, IdmPacket)>,
rx_receiver: Receiver<(u32, Option<Vec<u8>>)>,
snoop_sender: broadcast::Sender<DaemonIdmSnoopPacket>,
task: JoinHandle<()>,
}
impl DaemonIdm {
pub async fn new() -> Result<DaemonIdm> {
let (service, _, receiver) = ChannelService::new("krata-channel".to_string(), None).await?;
let (service, tx_raw_sender, rx_receiver) =
ChannelService::new("krata-channel".to_string(), None).await?;
let (tx_sender, tx_receiver) = channel(100);
let (snoop_sender, _) = broadcast::channel(100);
let task = service.launch().await?;
let listeners = Arc::new(Mutex::new(HashMap::new()));
let clients = Arc::new(Mutex::new(HashMap::new()));
let feeds = Arc::new(Mutex::new(HashMap::new()));
Ok(DaemonIdm {
receiver,
rx_receiver,
tx_receiver,
tx_sender,
tx_raw_sender,
snoop_sender,
task,
listeners,
clients,
feeds,
})
}
pub async fn launch(mut self) -> Result<DaemonIdmHandle> {
let listeners = self.listeners.clone();
let clients = self.clients.clone();
let feeds = self.feeds.clone();
let tx_sender = self.tx_sender.clone();
let snoop_sender = self.snoop_sender.clone();
let task = tokio::task::spawn(async move {
let mut buffers: HashMap<u32, BytesMut> = HashMap::new();
if let Err(error) = self.process(&mut buffers).await {
while let Err(error) = self.process(&mut buffers).await {
error!("failed to process idm: {}", error);
}
});
Ok(DaemonIdmHandle {
listeners,
clients,
feeds,
tx_sender,
snoop_sender,
task: Arc::new(task),
})
}
async fn process(&mut self, buffers: &mut HashMap<u32, BytesMut>) -> Result<()> {
loop {
let Some((domid, data)) = self.receiver.recv().await else {
break;
};
select! {
x = self.rx_receiver.recv() => match x {
Some((domid, data)) => {
if let Some(data) = data {
let buffer = buffers.entry(domid).or_insert_with_key(|_| BytesMut::new());
buffer.extend_from_slice(&data);
if buffer.len() < 6 {
continue;
}
let buffer = buffers.entry(domid).or_insert_with_key(|_| BytesMut::new());
buffer.extend_from_slice(&data);
if buffer.len() < 2 {
continue;
}
let size = (buffer[0] as u16 | (buffer[1] as u16) << 8) as usize;
let needed = size + 2;
if buffer.len() < needed {
continue;
}
let mut packet = buffer.split_to(needed);
packet.advance(2);
match IdmPacket::decode(packet) {
Ok(packet) => {
let guard = self.listeners.lock().await;
if let Some(sender) = guard.get(&domid) {
if let Err(error) = sender.try_send((domid, packet)) {
warn!("dropped idm packet from domain {}: {}", domid, error);
if buffer[0] != 0xff || buffer[1] != 0xff {
buffer.clear();
continue;
}
let size = (buffer[2] as u32 | (buffer[3] as u32) << 8 | (buffer[4] as u32) << 16 | (buffer[5] as u32) << 24) as usize;
let needed = size + 6;
if buffer.len() < needed {
continue;
}
let mut packet = buffer.split_to(needed);
packet.advance(6);
match IdmPacket::decode(packet) {
Ok(packet) => {
let _ = client_or_create(domid, &self.tx_sender, &self.clients, &self.feeds).await?;
let guard = self.feeds.lock().await;
if let Some(feed) = guard.get(&domid) {
let _ = feed.try_send(packet.clone());
}
let _ = self.snoop_sender.send(DaemonIdmSnoopPacket { from: domid, to: 0, packet });
}
Err(packet) => {
warn!("received invalid packet from domain {}: {}", domid, packet);
}
}
} else {
let mut clients = self.clients.lock().await;
let mut feeds = self.feeds.lock().await;
clients.remove(&domid);
feeds.remove(&domid);
}
},
None => {
break;
}
},
x = self.tx_receiver.recv() => match x {
Some((domid, packet)) => {
let data = packet.encode_to_vec();
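                        // Frame layout: two 0xff magic bytes, a u32 little-endian length, then the protobuf-encoded IdmPacket body.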
let mut buffer = vec![0u8; 6];
let length = data.len() as u32;
buffer[0] = 0xff;
buffer[1] = 0xff;
buffer[2] = length as u8;
                        buffer[3] = (length >> 8) as u8;
                        buffer[4] = (length >> 16) as u8;
                        buffer[5] = (length >> 24) as u8;
buffer.extend_from_slice(&data);
self.tx_raw_sender.send((domid, buffer)).await?;
let _ = self.snoop_sender.send(DaemonIdmSnoopPacket { from: 0, to: domid, packet });
},
None => {
break;
}
}
Err(packet) => {
warn!("received invalid packet from domain {}: {}", domid, packet);
}
}
};
}
Ok(())
}
@ -133,3 +193,50 @@ impl Drop for DaemonIdm {
self.task.abort();
}
}
async fn client_or_create(
domid: u32,
tx_sender: &Sender<(u32, IdmPacket)>,
clients: &ClientMap,
feeds: &BackendFeedMap,
) -> Result<IdmClient> {
let mut clients = clients.lock().await;
let mut feeds = feeds.lock().await;
match clients.entry(domid) {
Entry::Occupied(entry) => Ok(entry.get().clone()),
Entry::Vacant(entry) => {
let (rx_sender, rx_receiver) = channel(100);
feeds.insert(domid, rx_sender);
let backend = IdmDaemonBackend {
domid,
rx_receiver,
tx_sender: tx_sender.clone(),
};
let client = IdmClient::new(Box::new(backend) as Box<dyn IdmBackend>).await?;
entry.insert(client.clone());
Ok(client)
}
}
}
pub struct IdmDaemonBackend {
domid: u32,
rx_receiver: Receiver<IdmPacket>,
tx_sender: Sender<(u32, IdmPacket)>,
}
#[async_trait::async_trait]
impl IdmBackend for IdmDaemonBackend {
async fn recv(&mut self) -> Result<IdmPacket> {
if let Some(packet) = self.rx_receiver.recv().await {
Ok(packet)
} else {
Err(anyhow!("idm receive channel closed"))
}
}
async fn send(&mut self, packet: IdmPacket) -> Result<()> {
self.tx_sender.send((self.domid, packet)).await?;
Ok(())
}
}
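The daemon loop above and the guest-side file backend share the same wire framing. Here is a standalone sketch of that framing against the `IdmPacket` type from this changeset; the helper names are illustrative only and do not appear in the diff.

```rust
// Sketch only: encode/decode the IDM frame used above — 0xff 0xff magic,
// a u32 little-endian length, then the protobuf-encoded IdmPacket.
use krata::idm::protocol::IdmPacket;
use prost::Message;

fn frame(packet: &IdmPacket) -> Vec<u8> {
    let body = packet.encode_to_vec();
    let mut out = Vec::with_capacity(body.len() + 6);
    out.extend_from_slice(&[0xff, 0xff]);
    out.extend_from_slice(&(body.len() as u32).to_le_bytes());
    out.extend_from_slice(&body);
    out
}

fn unframe(frame: &[u8]) -> Option<IdmPacket> {
    if frame.len() < 6 || frame[0] != 0xff || frame[1] != 0xff {
        return None;
    }
    let size = u32::from_le_bytes([frame[2], frame[3], frame[4], frame[5]]) as usize;
    let body = frame.get(6..6 + size)?;
    IdmPacket::decode(body).ok()
}
```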

View File

@ -2,15 +2,17 @@ use std::{net::SocketAddr, path::PathBuf, str::FromStr};
use anyhow::Result;
use console::{DaemonConsole, DaemonConsoleHandle};
use control::RuntimeControlService;
use control::DaemonControlService;
use db::GuestStore;
use event::{DaemonEventContext, DaemonEventGenerator};
use idm::{DaemonIdm, DaemonIdmHandle};
use krata::{dial::ControlDialAddress, v1::control::control_service_server::ControlServiceServer};
use krataoci::{packer::service::OciPackerService, registry::OciPlatform};
use kratart::Runtime;
use log::info;
use reconcile::guest::GuestReconciler;
use tokio::{
fs,
net::UnixListener,
sync::mpsc::{channel, Sender},
task::JoinHandle,
@ -24,6 +26,8 @@ pub mod control;
pub mod db;
pub mod event;
pub mod idm;
pub mod metrics;
pub mod oci;
pub mod reconcile;
pub struct Daemon {
@ -33,14 +37,23 @@ pub struct Daemon {
guest_reconciler_task: JoinHandle<()>,
guest_reconciler_notify: Sender<Uuid>,
generator_task: JoinHandle<()>,
_idm: DaemonIdmHandle,
idm: DaemonIdmHandle,
console: DaemonConsoleHandle,
packer: OciPackerService,
}
const GUEST_RECONCILER_QUEUE_LEN: usize = 1000;
impl Daemon {
pub async fn new(store: String, runtime: Runtime) -> Result<Self> {
pub async fn new(store: String) -> Result<Self> {
let mut image_cache_dir = PathBuf::from(store.clone());
image_cache_dir.push("cache");
image_cache_dir.push("image");
fs::create_dir_all(&image_cache_dir).await?;
let packer = OciPackerService::new(None, &image_cache_dir, OciPlatform::current())?;
let runtime = Runtime::new(store.clone()).await?;
let guests_db_path = format!("{}/guests.db", store);
let guests = GuestStore::open(&PathBuf::from(guests_db_path))?;
let (guest_reconciler_notify, guest_reconciler_receiver) =
@ -57,11 +70,13 @@ impl Daemon {
guests.clone(),
events.clone(),
runtime_for_reconciler,
packer.clone(),
guest_reconciler_notify.clone(),
)?;
let guest_reconciler_task = guest_reconciler.launch(guest_reconciler_receiver).await?;
let generator_task = generator.launch().await?;
Ok(Self {
store,
guests,
@ -69,17 +84,20 @@ impl Daemon {
guest_reconciler_task,
guest_reconciler_notify,
generator_task,
_idm: idm,
idm,
console,
packer,
})
}
pub async fn listen(&mut self, addr: ControlDialAddress) -> Result<()> {
let control_service = RuntimeControlService::new(
let control_service = DaemonControlService::new(
self.events.clone(),
self.console.clone(),
self.idm.clone(),
self.guests.clone(),
self.guest_reconciler_notify.clone(),
self.packer.clone(),
);
let mut server = Server::builder();
@ -105,7 +123,7 @@ impl Daemon {
ControlDialAddress::UnixSocket { path } => {
let path = PathBuf::from(path);
if path.exists() {
tokio::fs::remove_file(&path).await?;
fs::remove_file(&path).await?;
}
let listener = UnixListener::bind(path)?;
let stream = UnixListenerStream::new(listener);

View File

@ -0,0 +1,27 @@
use krata::{
idm::protocol::{IdmMetricFormat, IdmMetricNode},
v1::common::{GuestMetricFormat, GuestMetricNode},
};
fn idm_metric_format_to_api(format: IdmMetricFormat) -> GuestMetricFormat {
match format {
IdmMetricFormat::Unknown => GuestMetricFormat::Unknown,
IdmMetricFormat::Bytes => GuestMetricFormat::Bytes,
IdmMetricFormat::Integer => GuestMetricFormat::Integer,
IdmMetricFormat::DurationSeconds => GuestMetricFormat::DurationSeconds,
}
}
pub fn idm_metric_to_api(node: IdmMetricNode) -> GuestMetricNode {
let format = node.format();
GuestMetricNode {
name: node.name,
value: node.value,
format: idm_metric_format_to_api(format).into(),
children: node
.children
.into_iter()
.map(idm_metric_to_api)
.collect::<Vec<_>>(),
}
}

41
crates/daemon/src/oci.rs Normal file
View File

@ -0,0 +1,41 @@
use krata::v1::control::{
PullImageProgress, PullImageProgressLayer, PullImageProgressLayerPhase, PullImageProgressPhase,
};
use krataoci::progress::{OciProgress, OciProgressLayer, OciProgressLayerPhase, OciProgressPhase};
fn convert_oci_layer_progress(layer: OciProgressLayer) -> PullImageProgressLayer {
PullImageProgressLayer {
id: layer.id,
phase: match layer.phase {
OciProgressLayerPhase::Waiting => PullImageProgressLayerPhase::Waiting,
OciProgressLayerPhase::Downloading => PullImageProgressLayerPhase::Downloading,
OciProgressLayerPhase::Downloaded => PullImageProgressLayerPhase::Downloaded,
OciProgressLayerPhase::Extracting => PullImageProgressLayerPhase::Extracting,
OciProgressLayerPhase::Extracted => PullImageProgressLayerPhase::Extracted,
}
.into(),
value: layer.value,
total: layer.total,
}
}
pub fn convert_oci_progress(oci: OciProgress) -> PullImageProgress {
PullImageProgress {
phase: match oci.phase {
OciProgressPhase::Resolving => PullImageProgressPhase::Resolving,
OciProgressPhase::Resolved => PullImageProgressPhase::Resolved,
OciProgressPhase::ConfigAcquire => PullImageProgressPhase::ConfigAcquire,
OciProgressPhase::LayerAcquire => PullImageProgressPhase::LayerAcquire,
OciProgressPhase::Packing => PullImageProgressPhase::Packing,
OciProgressPhase::Complete => PullImageProgressPhase::Complete,
}
.into(),
layers: oci
.layers
.into_values()
.map(convert_oci_layer_progress)
.collect::<Vec<_>>(),
value: oci.value,
total: oci.total,
}
}

View File

@ -5,13 +5,15 @@ use std::{
};
use anyhow::{anyhow, Result};
use krata::launchcfg::LaunchPackedFormat;
use krata::v1::{
common::{
guest_image_spec::Image, Guest, GuestErrorInfo, GuestExitInfo, GuestNetworkState,
GuestState, GuestStatus,
GuestOciImageFormat, GuestState, GuestStatus,
},
control::GuestChangedEvent,
};
use krataoci::packer::{service::OciPackerService, OciPackedFormat};
use kratart::{launch::GuestLaunchRequest, GuestInfo, Runtime};
use log::{error, info, trace, warn};
use tokio::{
@ -54,6 +56,7 @@ pub struct GuestReconciler {
guests: GuestStore,
events: DaemonEventContext,
runtime: Runtime,
packer: OciPackerService,
tasks: Arc<Mutex<HashMap<Uuid, GuestReconcilerEntry>>>,
guest_reconciler_notify: Sender<Uuid>,
reconcile_lock: Arc<RwLock<()>>,
@ -64,12 +67,14 @@ impl GuestReconciler {
guests: GuestStore,
events: DaemonEventContext,
runtime: Runtime,
packer: OciPackerService,
guest_reconciler_notify: Sender<Uuid>,
) -> Result<Self> {
Ok(Self {
guests,
events,
runtime,
packer,
tasks: Arc::new(Mutex::new(HashMap::new())),
guest_reconciler_notify,
reconcile_lock: Arc::new(RwLock::with_max_readers((), PARALLEL_LIMIT)),
@ -232,19 +237,38 @@ impl GuestReconciler {
return Err(anyhow!("oci spec not specified"));
}
};
let task = spec.task.as_ref().cloned().unwrap_or_default();
let image = self
.packer
.recall(
&oci.digest,
match oci.format() {
GuestOciImageFormat::Unknown => OciPackedFormat::Squashfs,
GuestOciImageFormat::Squashfs => OciPackedFormat::Squashfs,
GuestOciImageFormat::Erofs => OciPackedFormat::Erofs,
},
)
.await?;
let Some(image) = image else {
return Err(anyhow!(
"image {} in the requested format did not exist",
oci.digest
));
};
let info = self
.runtime
.launch(GuestLaunchRequest {
format: LaunchPackedFormat::Squashfs,
uuid: Some(uuid),
name: if spec.name.is_empty() {
None
} else {
Some(&spec.name)
Some(spec.name.clone())
},
image: &oci.image,
image,
vcpus: spec.vcpus,
mem: spec.mem,
env: task

View File

@ -14,8 +14,8 @@ cgroups-rs = { workspace = true }
env_logger = { workspace = true }
futures = { workspace = true }
ipnetwork = { workspace = true }
krata = { path = "../krata", version = "^0.0.7" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.7" }
krata = { path = "../krata", version = "^0.0.9" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.9" }
libc = { workspace = true }
log = { workspace = true }
nix = { workspace = true, features = ["ioctl", "process", "fs"] }
@ -25,8 +25,8 @@ rtnetlink = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sys-mount = { workspace = true }
sysinfo = { workspace = true }
tokio = { workspace = true }
walkdir = { workspace = true }
[lib]
name = "krataguest"

View File

@ -23,6 +23,8 @@ async fn main() -> Result<()> {
if let Err(error) = guest.init().await {
error!("failed to initialize guest: {}", error);
death(127).await?;
return Ok(());
}
death(1).await?;
Ok(())
}

View File

@ -1,16 +1,20 @@
use crate::{
childwait::{ChildEvent, ChildWait},
death,
metrics::MetricsCollector,
};
use anyhow::Result;
use cgroups_rs::Cgroup;
use krata::idm::{
client::IdmClient,
protocol::{idm_event::Event, IdmEvent, IdmExitEvent, IdmPacket},
protocol::{
idm_event::Event, idm_request::Request, idm_response::Response, IdmEvent, IdmExitEvent,
IdmMetricsResponse, IdmPingResponse, IdmRequest,
},
};
use log::error;
use log::debug;
use nix::unistd::Pid;
use tokio::select;
use tokio::{select, sync::broadcast};
pub struct GuestBackground {
idm: IdmClient,
@ -30,16 +34,37 @@ impl GuestBackground {
}
pub async fn run(&mut self) -> Result<()> {
let mut event_subscription = self.idm.subscribe().await?;
let mut requests_subscription = self.idm.requests().await?;
loop {
select! {
x = self.idm.receiver.recv() => match x {
Some(_packet) => {
x = event_subscription.recv() => match x {
Ok(_event) => {
},
None => {
error!("idm packet channel closed");
Err(broadcast::error::RecvError::Closed) => {
debug!("idm packet channel closed");
break;
},
_ => {
continue;
}
},
x = requests_subscription.recv() => match x {
Ok(request) => {
self.handle_idm_request(request).await?;
},
Err(broadcast::error::RecvError::Closed) => {
debug!("idm packet channel closed");
break;
},
_ => {
continue;
}
},
@ -54,14 +79,34 @@ impl GuestBackground {
Ok(())
}
async fn handle_idm_request(&mut self, packet: IdmRequest) -> Result<()> {
let id = packet.id;
match packet.request {
Some(Request::Ping(_)) => {
self.idm
.respond(id, Response::Ping(IdmPingResponse {}))
.await?;
}
Some(Request::Metrics(_)) => {
let metrics = MetricsCollector::new()?;
let root = metrics.collect()?;
let response = IdmMetricsResponse { root: Some(root) };
self.idm.respond(id, Response::Metrics(response)).await?;
}
None => {}
}
Ok(())
}
async fn child_event(&mut self, event: ChildEvent) -> Result<()> {
if event.pid == self.child {
self.idm
.sender
.send(IdmPacket {
event: Some(IdmEvent {
event: Some(Event::Exit(IdmExitEvent { code: event.status })),
}),
.emit(IdmEvent {
event: Some(Event::Exit(IdmExitEvent { code: event.status })),
})
.await?;
death(event.status).await?;

View File

@ -4,7 +4,7 @@ use futures::stream::TryStreamExt;
use ipnetwork::IpNetwork;
use krata::ethtool::EthtoolHandle;
use krata::idm::client::IdmClient;
use krata::launchcfg::{LaunchInfo, LaunchNetwork};
use krata::launchcfg::{LaunchInfo, LaunchNetwork, LaunchPackedFormat};
use libc::{sethostname, setsid, TIOCSCTTY};
use log::{trace, warn};
use nix::ioctl_write_int_bad;
@ -17,14 +17,12 @@ use std::fs::{File, OpenOptions, Permissions};
use std::io;
use std::net::{Ipv4Addr, Ipv6Addr};
use std::os::fd::AsRawFd;
use std::os::linux::fs::MetadataExt;
use std::os::unix::ffi::OsStrExt;
use std::os::unix::fs::{chroot, PermissionsExt};
use std::path::{Path, PathBuf};
use std::str::FromStr;
use sys_mount::{FilesystemType, Mount, MountFlags};
use tokio::fs;
use walkdir::WalkDir;
use crate::background::GuestBackground;
@ -82,13 +80,14 @@ impl GuestInit {
let idm = IdmClient::open("/dev/hvc1")
.await
.map_err(|x| anyhow!("failed to open idm client: {}", x))?;
self.mount_squashfs_images().await?;
self.mount_config_image().await?;
let config = self.parse_image_config().await?;
let launch = self.parse_launch_config().await?;
self.mount_root_image(launch.root.format.clone()).await?;
self.mount_new_root().await?;
self.nuke_initrd().await?;
self.bind_new_root().await?;
if let Some(hostname) = launch.hostname.clone() {
@ -101,11 +100,19 @@ impl GuestInit {
if result != 0 {
warn!("failed to set hostname: {}", result);
}
let etc = PathBuf::from_str("/etc")?;
if !etc.exists() {
fs::create_dir(&etc).await?;
}
let mut etc_hostname = etc;
etc_hostname.push("hostname");
fs::write(&etc_hostname, hostname + "\n").await?;
}
if let Some(network) = &launch.network {
trace!("initializing network");
if let Err(error) = self.network_setup(network).await {
if let Err(error) = self.network_setup(&launch, network).await {
warn!("failed to initialize network: {}", error);
}
}
@ -180,24 +187,41 @@ impl GuestInit {
Ok(())
}
async fn mount_squashfs_images(&mut self) -> Result<()> {
trace!("mounting squashfs images");
let image_mount_path = Path::new(IMAGE_MOUNT_PATH);
async fn mount_config_image(&mut self) -> Result<()> {
trace!("mounting config image");
let config_mount_path = Path::new(CONFIG_MOUNT_PATH);
self.mount_squashfs(Path::new(IMAGE_BLOCK_DEVICE_PATH), image_mount_path)
.await?;
self.mount_squashfs(Path::new(CONFIG_BLOCK_DEVICE_PATH), config_mount_path)
self.mount_image(
Path::new(CONFIG_BLOCK_DEVICE_PATH),
config_mount_path,
LaunchPackedFormat::Squashfs,
)
.await?;
Ok(())
}
async fn mount_root_image(&mut self, format: LaunchPackedFormat) -> Result<()> {
trace!("mounting root image");
let image_mount_path = Path::new(IMAGE_MOUNT_PATH);
self.mount_image(Path::new(IMAGE_BLOCK_DEVICE_PATH), image_mount_path, format)
.await?;
Ok(())
}
async fn mount_squashfs(&mut self, from: &Path, to: &Path) -> Result<()> {
trace!("mounting squashfs image {:?} to {:?}", from, to);
async fn mount_image(
&mut self,
from: &Path,
to: &Path,
format: LaunchPackedFormat,
) -> Result<()> {
trace!("mounting {:?} image {:?} to {:?}", format, from, to);
if !to.is_dir() {
fs::create_dir(to).await?;
}
Mount::builder()
.fstype(FilesystemType::Manual("squashfs"))
.fstype(FilesystemType::Manual(match format {
LaunchPackedFormat::Squashfs => "squashfs",
LaunchPackedFormat::Erofs => "erofs",
}))
.flags(MountFlags::RDONLY)
.mount(from, to)?;
Ok(())
@ -271,40 +295,6 @@ impl GuestInit {
Ok(serde_json::from_str(&content)?)
}
async fn nuke_initrd(&mut self) -> Result<()> {
trace!("nuking initrd");
let initrd_dev = fs::metadata("/").await?.st_dev();
for item in WalkDir::new("/")
.same_file_system(true)
.follow_links(false)
.contents_first(true)
{
if item.is_err() {
continue;
}
let item = item?;
let metadata = match item.metadata() {
Ok(value) => value,
Err(_) => continue,
};
if metadata.st_dev() != initrd_dev {
continue;
}
if metadata.is_symlink() || metadata.is_file() {
let _ = fs::remove_file(item.path()).await;
trace!("deleting file {:?}", item.path());
} else if metadata.is_dir() {
let _ = fs::remove_dir(item.path()).await;
trace!("deleting directory {:?}", item.path());
}
}
trace!("nuked initrd");
Ok(())
}
async fn bind_new_root(&mut self) -> Result<()> {
self.mount_move_subtree(Path::new(SYS_PATH), Path::new(NEW_ROOT_SYS_PATH))
.await?;
@ -324,7 +314,7 @@ impl GuestInit {
Ok(())
}
async fn network_setup(&mut self, network: &LaunchNetwork) -> Result<()> {
async fn network_setup(&mut self, cfg: &LaunchInfo, network: &LaunchNetwork) -> Result<()> {
trace!("setting up network for link");
let etc = PathBuf::from_str("/etc")?;
@ -332,14 +322,33 @@ impl GuestInit {
fs::create_dir(etc).await?;
}
let resolv = PathBuf::from_str("/etc/resolv.conf")?;
let mut lines = vec!["# krata resolver configuration".to_string()];
for nameserver in &network.resolver.nameservers {
lines.push(format!("nameserver {}", nameserver));
{
let mut lines = vec!["# krata resolver configuration".to_string()];
for nameserver in &network.resolver.nameservers {
lines.push(format!("nameserver {}", nameserver));
}
let mut conf = lines.join("\n");
conf.push('\n');
fs::write(resolv, conf).await?;
}
let hosts = PathBuf::from_str("/etc/hosts")?;
if let Some(ref hostname) = cfg.hostname {
let mut lines = if hosts.exists() {
fs::read_to_string(&hosts)
.await?
.lines()
.map(|x| x.to_string())
.collect::<Vec<_>>()
} else {
vec!["127.0.0.1 localhost".to_string()]
};
lines.push(format!("127.0.1.1 {}", hostname));
fs::write(&hosts, lines.join("\n") + "\n").await?;
}
let mut conf = lines.join("\n");
conf.push('\n');
fs::write(resolv, conf).await?;
self.network_configure_ethtool(network).await?;
self.network_configure_link(network).await?;
Ok(())
@ -457,7 +466,12 @@ impl GuestInit {
}
env.extend(launch.env.clone());
env.insert("KRATA_CONTAINER".to_string(), "1".to_string());
env.insert("TERM".to_string(), "vt100".to_string());
// If we were not provided a terminal definition in our launch manifest, we
// default to xterm as most terminal emulators support the xterm control codes.
if !env.contains_key("TERM") {
env.insert("TERM".to_string(), "xterm".to_string());
}
let path = GuestInit::resolve_executable(&env, path.into())?;
let Some(file_name) = path.file_name() else {

View File

@ -7,6 +7,7 @@ use xenstore::{XsdClient, XsdInterface};
pub mod background;
pub mod childwait;
pub mod init;
pub mod metrics;
pub async fn death(code: c_int) -> Result<()> {
let store = XsdClient::open().await?;

121
crates/guest/src/metrics.rs Normal file
View File

@ -0,0 +1,121 @@
use std::{ops::Add, path::Path};
use anyhow::Result;
use krata::idm::protocol::{IdmMetricFormat, IdmMetricNode};
use sysinfo::Process;
pub struct MetricsCollector {}
impl MetricsCollector {
pub fn new() -> Result<Self> {
Ok(MetricsCollector {})
}
pub fn collect(&self) -> Result<IdmMetricNode> {
let mut sysinfo = sysinfo::System::new();
Ok(IdmMetricNode::structural(
"guest",
vec![
self.collect_system(&mut sysinfo)?,
self.collect_processes(&mut sysinfo)?,
],
))
}
fn collect_system(&self, sysinfo: &mut sysinfo::System) -> Result<IdmMetricNode> {
sysinfo.refresh_memory();
Ok(IdmMetricNode::structural(
"system",
vec![IdmMetricNode::structural(
"memory",
vec![
IdmMetricNode::value("total", sysinfo.total_memory(), IdmMetricFormat::Bytes),
IdmMetricNode::value("used", sysinfo.used_memory(), IdmMetricFormat::Bytes),
IdmMetricNode::value("free", sysinfo.free_memory(), IdmMetricFormat::Bytes),
],
)],
))
}
fn collect_processes(&self, sysinfo: &mut sysinfo::System) -> Result<IdmMetricNode> {
sysinfo.refresh_processes();
let mut processes = Vec::new();
let mut sysinfo_processes = sysinfo.processes().values().collect::<Vec<_>>();
sysinfo_processes.sort_by_key(|x| x.pid());
for process in sysinfo_processes {
if process.thread_kind().is_some() {
continue;
}
processes.push(MetricsCollector::process_node(process)?);
}
Ok(IdmMetricNode::structural("process", processes))
}
fn process_node(process: &Process) -> Result<IdmMetricNode> {
let mut metrics = vec![];
if let Some(parent) = process.parent() {
metrics.push(IdmMetricNode::value(
"parent",
parent.as_u32() as u64,
IdmMetricFormat::Integer,
));
}
if let Some(exe) = process.exe().and_then(path_as_str) {
metrics.push(IdmMetricNode::raw_value("executable", exe));
}
if let Some(working_directory) = process.cwd().and_then(path_as_str) {
metrics.push(IdmMetricNode::raw_value("cwd", working_directory));
}
let cmdline = process.cmd().to_vec();
metrics.push(IdmMetricNode::raw_value("cmdline", cmdline));
metrics.push(IdmMetricNode::structural(
"memory",
vec![
IdmMetricNode::value("resident", process.memory(), IdmMetricFormat::Bytes),
IdmMetricNode::value("virtual", process.virtual_memory(), IdmMetricFormat::Bytes),
],
));
metrics.push(IdmMetricNode::value(
"lifetime",
process.run_time(),
IdmMetricFormat::DurationSeconds,
));
metrics.push(IdmMetricNode::value(
"uid",
process.user_id().map(|x| (*x).add(0)).unwrap_or(0) as f64,
IdmMetricFormat::Integer,
));
metrics.push(IdmMetricNode::value(
"gid",
process.group_id().map(|x| (*x).add(0)).unwrap_or(0) as f64,
IdmMetricFormat::Integer,
));
metrics.push(IdmMetricNode::value(
"euid",
process
.effective_user_id()
.map(|x| (*x).add(0))
.unwrap_or(0) as f64,
IdmMetricFormat::Integer,
));
metrics.push(IdmMetricNode::value(
"egid",
process.effective_group_id().map(|x| x.add(0)).unwrap_or(0) as f64,
IdmMetricFormat::Integer,
));
Ok(IdmMetricNode::structural(
process.pid().to_string(),
metrics,
))
}
}
fn path_as_str(path: &Path) -> Option<String> {
String::from_utf8(path.as_os_str().as_encoded_bytes().to_vec()).ok()
}

View File

@ -10,12 +10,15 @@ resolver = "2"
[dependencies]
anyhow = { workspace = true }
async-trait = { workspace = true }
bytes = { workspace = true }
libc = { workspace = true }
log = { workspace = true }
once_cell = { workspace = true }
prost = { workspace = true }
prost-reflect = { workspace = true }
prost-types = { workspace = true }
scopeguard = { workspace = true }
serde = { workspace = true }
tonic = { workspace = true }
tokio = { workspace = true }

View File

@ -6,18 +6,12 @@ fn main() -> Result<()> {
.descriptor_pool("crate::DESCRIPTOR_POOL")
.configure(
&mut config,
&[
"proto/krata/v1/control.proto",
"proto/krata/internal/idm.proto",
],
&["proto/krata/v1/control.proto", "proto/krata/bus/idm.proto"],
&["proto/"],
)?;
tonic_build::configure().compile_with_config(
config,
&[
"proto/krata/v1/control.proto",
"proto/krata/internal/idm.proto",
],
&["proto/krata/v1/control.proto", "proto/krata/bus/idm.proto"],
&["proto/"],
)?;
Ok(())

View File

@ -0,0 +1,67 @@
syntax = "proto3";
package krata.bus.idm;
option java_multiple_files = true;
option java_package = "dev.krata.proto.bus.idm";
option java_outer_classname = "IdmProto";
import "google/protobuf/struct.proto";
message IdmPacket {
oneof content {
IdmEvent event = 1;
IdmRequest request = 2;
IdmResponse response = 3;
}
}
message IdmEvent {
oneof event {
IdmExitEvent exit = 1;
}
}
message IdmExitEvent {
int32 code = 1;
}
message IdmRequest {
uint64 id = 1;
oneof request {
IdmPingRequest ping = 2;
IdmMetricsRequest metrics = 3;
}
}
message IdmPingRequest {}
message IdmMetricsRequest {}
message IdmResponse {
uint64 id = 1;
oneof response {
IdmPingResponse ping = 2;
IdmMetricsResponse metrics = 3;
}
}
message IdmPingResponse {}
message IdmMetricsResponse {
IdmMetricNode root = 1;
}
message IdmMetricNode {
string name = 1;
google.protobuf.Value value = 2;
IdmMetricFormat format = 3;
repeated IdmMetricNode children = 4;
}
enum IdmMetricFormat {
IDM_METRIC_FORMAT_UNKNOWN = 0;
IDM_METRIC_FORMAT_BYTES = 1;
IDM_METRIC_FORMAT_INTEGER = 2;
IDM_METRIC_FORMAT_DURATION_SECONDS = 3;
}

View File

@ -1,21 +0,0 @@
syntax = "proto3";
package krata.internal.idm;
option java_multiple_files = true;
option java_package = "dev.krata.proto.internal.idm";
option java_outer_classname = "IdmProto";
message IdmExitEvent {
int32 code = 1;
}
message IdmEvent {
oneof event {
IdmExitEvent exit = 1;
}
}
message IdmPacket {
IdmEvent event = 1;
}

View File

@ -6,6 +6,8 @@ option java_multiple_files = true;
option java_package = "dev.krata.proto.v1.common";
option java_outer_classname = "CommonProto";
import "google/protobuf/struct.proto";
message Guest {
string id = 1;
GuestSpec spec = 2;
@ -27,8 +29,15 @@ message GuestImageSpec {
}
}
enum GuestOciImageFormat {
GUEST_OCI_IMAGE_FORMAT_UNKNOWN = 0;
GUEST_OCI_IMAGE_FORMAT_SQUASHFS = 1;
GUEST_OCI_IMAGE_FORMAT_EROFS = 2;
}
message GuestOciImageSpec {
string image = 1;
string digest = 1;
GuestOciImageFormat format = 2;
}
message GuestTaskSpec {
@ -80,3 +89,17 @@ message GuestExitInfo {
message GuestErrorInfo {
string message = 1;
}
message GuestMetricNode {
string name = 1;
google.protobuf.Value value = 2;
GuestMetricFormat format = 3;
repeated GuestMetricNode children = 4;
}
enum GuestMetricFormat {
GUEST_METRIC_FORMAT_UNKNOWN = 0;
GUEST_METRIC_FORMAT_BYTES = 1;
GUEST_METRIC_FORMAT_INTEGER = 2;
GUEST_METRIC_FORMAT_DURATION_SECONDS = 3;
}

View File

@ -6,6 +6,7 @@ option java_multiple_files = true;
option java_package = "dev.krata.proto.v1.control";
option java_outer_classname = "ControlProto";
import "krata/bus/idm.proto";
import "krata/v1/common.proto";
service ControlService {
@ -13,8 +14,14 @@ service ControlService {
rpc DestroyGuest(DestroyGuestRequest) returns (DestroyGuestReply);
rpc ResolveGuest(ResolveGuestRequest) returns (ResolveGuestReply);
rpc ListGuests(ListGuestsRequest) returns (ListGuestsReply);
rpc ConsoleData(stream ConsoleDataRequest) returns (stream ConsoleDataReply);
rpc ReadGuestMetrics(ReadGuestMetricsRequest) returns (ReadGuestMetricsReply);
rpc SnoopIdm(SnoopIdmRequest) returns (stream SnoopIdmReply);
rpc WatchEvents(WatchEventsRequest) returns (stream WatchEventsReply);
rpc PullImage(PullImageRequest) returns (stream PullImageReply);
}
message CreateGuestRequest {
@ -65,3 +72,63 @@ message WatchEventsReply {
message GuestChangedEvent {
krata.v1.common.Guest guest = 1;
}
message ReadGuestMetricsRequest {
string guest_id = 1;
}
message ReadGuestMetricsReply {
krata.v1.common.GuestMetricNode root = 1;
}
message SnoopIdmRequest {}
message SnoopIdmReply {
uint32 from = 1;
uint32 to = 2;
krata.bus.idm.IdmPacket packet = 3;
}
enum PullImageProgressLayerPhase {
PULL_IMAGE_PROGRESS_LAYER_PHASE_UNKNOWN = 0;
PULL_IMAGE_PROGRESS_LAYER_PHASE_WAITING = 1;
PULL_IMAGE_PROGRESS_LAYER_PHASE_DOWNLOADING = 2;
PULL_IMAGE_PROGRESS_LAYER_PHASE_DOWNLOADED = 3;
PULL_IMAGE_PROGRESS_LAYER_PHASE_EXTRACTING = 4;
PULL_IMAGE_PROGRESS_LAYER_PHASE_EXTRACTED = 5;
}
message PullImageProgressLayer {
string id = 1;
PullImageProgressLayerPhase phase = 2;
uint64 value = 3;
uint64 total = 4;
}
enum PullImageProgressPhase {
PULL_IMAGE_PROGRESS_PHASE_UNKNOWN = 0;
PULL_IMAGE_PROGRESS_PHASE_RESOLVING = 1;
PULL_IMAGE_PROGRESS_PHASE_RESOLVED = 2;
PULL_IMAGE_PROGRESS_PHASE_CONFIG_ACQUIRE = 3;
PULL_IMAGE_PROGRESS_PHASE_LAYER_ACQUIRE = 4;
PULL_IMAGE_PROGRESS_PHASE_PACKING = 5;
PULL_IMAGE_PROGRESS_PHASE_COMPLETE = 6;
}
message PullImageProgress {
PullImageProgressPhase phase = 1;
repeated PullImageProgressLayer layers = 2;
uint64 value = 3;
uint64 total = 4;
}
message PullImageRequest {
string image = 1;
krata.v1.common.GuestOciImageFormat format = 2;
}
message PullImageReply {
PullImageProgress progress = 1;
string digest = 2;
krata.v1.common.GuestOciImageFormat format = 3;
}
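For the new ReadGuestMetrics rpc, a correspondingly small client-side sketch, again assuming the standard tonic-generated client module rather than anything defined in this changeset:

```rust
// Sketch only: read a guest's metrics tree over the control service.
use krata::v1::control::{control_service_client::ControlServiceClient, ReadGuestMetricsRequest};

async fn read_guest_metrics_example(addr: String, guest_id: String) -> anyhow::Result<()> {
    let mut client = ControlServiceClient::connect(addr).await?;
    let reply = client
        .read_guest_metrics(ReadGuestMetricsRequest { guest_id })
        .await?
        .into_inner();
    // root is an optional GuestMetricNode tree mirroring the IDM metrics response.
    println!("{:#?}", reply.root);
    Ok(())
}
```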

View File

@ -0,0 +1,89 @@
use prost_types::{ListValue, Value};
include!(concat!(env!("OUT_DIR"), "/krata.bus.idm.rs"));
pub trait AsIdmMetricValue {
fn as_metric_value(&self) -> Value;
}
impl IdmMetricNode {
pub fn structural<N: AsRef<str>>(name: N, children: Vec<IdmMetricNode>) -> IdmMetricNode {
IdmMetricNode {
name: name.as_ref().to_string(),
value: None,
format: IdmMetricFormat::Unknown.into(),
children,
}
}
pub fn raw_value<N: AsRef<str>, V: AsIdmMetricValue>(name: N, value: V) -> IdmMetricNode {
IdmMetricNode {
name: name.as_ref().to_string(),
value: Some(value.as_metric_value()),
format: IdmMetricFormat::Unknown.into(),
children: vec![],
}
}
pub fn value<N: AsRef<str>, V: AsIdmMetricValue>(
name: N,
value: V,
format: IdmMetricFormat,
) -> IdmMetricNode {
IdmMetricNode {
name: name.as_ref().to_string(),
value: Some(value.as_metric_value()),
format: format.into(),
children: vec![],
}
}
}
impl AsIdmMetricValue for String {
fn as_metric_value(&self) -> Value {
Value {
kind: Some(prost_types::value::Kind::StringValue(self.to_string())),
}
}
}
impl AsIdmMetricValue for &str {
fn as_metric_value(&self) -> Value {
Value {
kind: Some(prost_types::value::Kind::StringValue(self.to_string())),
}
}
}
impl AsIdmMetricValue for u64 {
fn as_metric_value(&self) -> Value {
numeric(*self as f64)
}
}
impl AsIdmMetricValue for i64 {
fn as_metric_value(&self) -> Value {
numeric(*self as f64)
}
}
impl AsIdmMetricValue for f64 {
fn as_metric_value(&self) -> Value {
numeric(*self)
}
}
impl<T: AsIdmMetricValue> AsIdmMetricValue for Vec<T> {
fn as_metric_value(&self) -> Value {
let values = self.iter().map(|x| x.as_metric_value()).collect::<_>();
Value {
kind: Some(prost_types::value::Kind::ListValue(ListValue { values })),
}
}
}
fn numeric(value: f64) -> Value {
Value {
kind: Some(prost_types::value::Kind::NumberValue(value)),
}
}
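A small sketch showing how the constructors above compose into a metric tree; the names and values here are illustrative, not taken from the changeset.

```rust
// Sketch only: build a nested metric tree with the helper constructors.
use krata::idm::protocol::{IdmMetricFormat, IdmMetricNode};

fn example_metric_tree() -> IdmMetricNode {
    IdmMetricNode::structural(
        "system",
        vec![IdmMetricNode::structural(
            "memory",
            vec![
                IdmMetricNode::value("total", 8u64 * 1024 * 1024 * 1024, IdmMetricFormat::Bytes),
                IdmMetricNode::raw_value("kernel", "6.1.0"),
            ],
        )],
    )
}
```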

View File

@ -0,0 +1 @@
pub mod idm;

View File

@ -1,8 +1,18 @@
use std::path::Path;
use std::{
collections::HashMap,
path::Path,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
time::Duration,
};
use super::protocol::IdmPacket;
use super::protocol::{
idm_packet::Content, idm_request::Request, idm_response::Response, IdmEvent, IdmPacket,
IdmRequest, IdmResponse,
};
use anyhow::{anyhow, Result};
use bytes::BytesMut;
use log::{debug, error};
use nix::sys::termios::{cfmakeraw, tcgetattr, tcsetattr, SetArg};
use prost::Message;
@ -10,44 +20,39 @@ use tokio::{
fs::File,
io::{unix::AsyncFd, AsyncReadExt, AsyncWriteExt},
select,
sync::mpsc::{channel, Receiver, Sender},
sync::{
broadcast,
mpsc::{channel, Receiver, Sender},
oneshot, Mutex,
},
task::JoinHandle,
time::timeout,
};
type RequestMap = Arc<Mutex<HashMap<u64, oneshot::Sender<IdmResponse>>>>;
const IDM_PACKET_QUEUE_LEN: usize = 100;
const IDM_REQUEST_TIMEOUT_SECS: u64 = 10;
const IDM_PACKET_MAX_SIZE: usize = 20 * 1024 * 1024;
pub struct IdmClient {
pub receiver: Receiver<IdmPacket>,
pub sender: Sender<IdmPacket>,
task: JoinHandle<()>,
#[async_trait::async_trait]
pub trait IdmBackend: Send {
async fn recv(&mut self) -> Result<IdmPacket>;
async fn send(&mut self, packet: IdmPacket) -> Result<()>;
}
impl Drop for IdmClient {
fn drop(&mut self) {
self.task.abort();
}
pub struct IdmFileBackend {
read_fd: Arc<Mutex<AsyncFd<File>>>,
write: Arc<Mutex<File>>,
}
impl IdmClient {
pub async fn open<P: AsRef<Path>>(path: P) -> Result<IdmClient> {
let file = File::options()
.read(true)
.write(true)
.create(false)
.open(path)
.await?;
IdmClient::set_raw_port(&file)?;
let (rx_sender, rx_receiver) = channel(IDM_PACKET_QUEUE_LEN);
let (tx_sender, tx_receiver) = channel(IDM_PACKET_QUEUE_LEN);
let task = tokio::task::spawn(async move {
if let Err(error) = IdmClient::process(file, rx_sender, tx_receiver).await {
debug!("failed to handle idm client processing: {}", error);
}
});
Ok(IdmClient {
receiver: rx_receiver,
sender: tx_sender,
task,
impl IdmFileBackend {
pub async fn new(read_file: File, write_file: File) -> Result<IdmFileBackend> {
IdmFileBackend::set_raw_port(&read_file)?;
IdmFileBackend::set_raw_port(&write_file)?;
Ok(IdmFileBackend {
read_fd: Arc::new(Mutex::new(AsyncFd::new(read_file)?)),
write: Arc::new(Mutex::new(write_file)),
})
}
@ -57,31 +62,208 @@ impl IdmClient {
tcsetattr(file, SetArg::TCSANOW, &termios)?;
Ok(())
}
}
#[async_trait::async_trait]
impl IdmBackend for IdmFileBackend {
async fn recv(&mut self) -> Result<IdmPacket> {
let mut fd = self.read_fd.lock().await;
let mut guard = fd.readable_mut().await?;
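        // Frames start with two 0xff magic bytes, followed by a u32 little-endian length and the packet body.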
let b1 = guard.get_inner_mut().read_u8().await?;
if b1 != 0xff {
return Ok(IdmPacket::default());
}
let b2 = guard.get_inner_mut().read_u8().await?;
if b2 != 0xff {
return Ok(IdmPacket::default());
}
let size = guard.get_inner_mut().read_u32_le().await?;
if size == 0 {
return Ok(IdmPacket::default());
}
let mut buffer = vec![0u8; size as usize];
guard.get_inner_mut().read_exact(&mut buffer).await?;
match IdmPacket::decode(buffer.as_slice()) {
Ok(packet) => Ok(packet),
Err(error) => Err(anyhow!("received invalid idm packet: {}", error)),
}
}
async fn send(&mut self, packet: IdmPacket) -> Result<()> {
let mut file = self.write.lock().await;
let data = packet.encode_to_vec();
file.write_all(&[0xff, 0xff]).await?;
file.write_u32_le(data.len() as u32).await?;
file.write_all(&data).await?;
Ok(())
}
}
#[derive(Clone)]
pub struct IdmClient {
request_backend_sender: broadcast::Sender<IdmRequest>,
next_request_id: Arc<Mutex<u64>>,
event_receiver_sender: broadcast::Sender<IdmEvent>,
tx_sender: Sender<IdmPacket>,
requests: RequestMap,
task: Arc<JoinHandle<()>>,
}
impl Drop for IdmClient {
fn drop(&mut self) {
if Arc::strong_count(&self.task) <= 1 {
self.task.abort();
}
}
}
impl IdmClient {
pub async fn new(backend: Box<dyn IdmBackend>) -> Result<IdmClient> {
let requests = Arc::new(Mutex::new(HashMap::new()));
let (event_sender, event_receiver) = broadcast::channel(IDM_PACKET_QUEUE_LEN);
let (internal_request_backend_sender, _) = broadcast::channel(IDM_PACKET_QUEUE_LEN);
let (tx_sender, tx_receiver) = channel(IDM_PACKET_QUEUE_LEN);
let backend_event_sender = event_sender.clone();
let request_backend_sender = internal_request_backend_sender.clone();
let requests_for_client = requests.clone();
let task = tokio::task::spawn(async move {
if let Err(error) = IdmClient::process(
backend,
backend_event_sender,
requests,
internal_request_backend_sender,
event_receiver,
tx_receiver,
)
.await
{
debug!("failed to handle idm client processing: {}", error);
}
});
Ok(IdmClient {
next_request_id: Arc::new(Mutex::new(0)),
event_receiver_sender: event_sender.clone(),
request_backend_sender,
requests: requests_for_client,
tx_sender,
task: Arc::new(task),
})
}
pub async fn open<P: AsRef<Path>>(path: P) -> Result<IdmClient> {
let read_file = File::options()
.read(true)
.write(false)
.create(false)
.open(&path)
.await?;
let write_file = File::options()
.read(false)
.write(true)
.create(false)
.open(path)
.await?;
let backend = IdmFileBackend::new(read_file, write_file).await?;
IdmClient::new(Box::new(backend) as Box<dyn IdmBackend>).await
}
pub async fn emit(&self, event: IdmEvent) -> Result<()> {
self.tx_sender
.send(IdmPacket {
content: Some(Content::Event(event)),
})
.await?;
Ok(())
}
pub async fn requests(&self) -> Result<broadcast::Receiver<IdmRequest>> {
Ok(self.request_backend_sender.subscribe())
}
pub async fn respond(&self, id: u64, response: Response) -> Result<()> {
let packet = IdmPacket {
content: Some(Content::Response(IdmResponse {
id,
response: Some(response),
})),
};
self.tx_sender.send(packet).await?;
Ok(())
}
pub async fn subscribe(&self) -> Result<broadcast::Receiver<IdmEvent>> {
Ok(self.event_receiver_sender.subscribe())
}
pub async fn send(&self, request: Request) -> Result<Response> {
let (sender, receiver) = oneshot::channel::<IdmResponse>();
let req = {
let mut guard = self.next_request_id.lock().await;
let req = *guard;
*guard = req.wrapping_add(1);
req
};
let mut requests = self.requests.lock().await;
requests.insert(req, sender);
drop(requests);
let success = AtomicBool::new(false);
let _guard = scopeguard::guard(self.requests.clone(), |requests| {
if success.load(Ordering::Acquire) {
return;
}
tokio::task::spawn(async move {
let mut requests = requests.lock().await;
requests.remove(&req);
});
});
self.tx_sender
.send(IdmPacket {
content: Some(Content::Request(IdmRequest {
id: req,
request: Some(request),
})),
})
.await?;
let response = timeout(Duration::from_secs(IDM_REQUEST_TIMEOUT_SECS), receiver).await??;
success.store(true, Ordering::Release);
if let Some(response) = response.response {
Ok(response)
} else {
Err(anyhow!("response did not contain any content"))
}
}
async fn process(
file: File,
sender: Sender<IdmPacket>,
mut backend: Box<dyn IdmBackend>,
event_sender: broadcast::Sender<IdmEvent>,
requests: RequestMap,
request_backend_sender: broadcast::Sender<IdmRequest>,
_event_receiver: broadcast::Receiver<IdmEvent>,
mut receiver: Receiver<IdmPacket>,
) -> Result<()> {
let mut file = AsyncFd::new(file)?;
loop {
select! {
x = file.readable_mut() => match x {
Ok(mut guard) => {
let size = guard.get_inner_mut().read_u16_le().await?;
if size == 0 {
continue;
}
let mut buffer = BytesMut::with_capacity(size as usize);
guard.get_inner_mut().read_exact(&mut buffer).await?;
match IdmPacket::decode(buffer) {
Ok(packet) => {
sender.send(packet).await?;
x = backend.recv() => match x {
Ok(packet) => {
match packet.content {
Some(Content::Event(event)) => {
let _ = event_sender.send(event);
},
Err(error) => {
error!("received invalid idm packet: {}", error);
}
Some(Content::Request(request)) => {
let _ = request_backend_sender.send(request);
},
Some(Content::Response(response)) => {
let mut requests = requests.lock().await;
if let Some(sender) = requests.remove(&response.id) {
drop(requests);
let _ = sender.send(response);
}
},
_ => {},
}
},
@ -91,13 +273,12 @@ impl IdmClient {
},
x = receiver.recv() => match x {
Some(packet) => {
let data = packet.encode_to_vec();
if data.len() > u16::MAX as usize {
error!("unable to send idm packet, packet size exceeded (tried to send {} bytes)", data.len());
let length = packet.encoded_len();
if length > IDM_PACKET_MAX_SIZE {
error!("unable to send idm packet, packet size exceeded (tried to send {} bytes)", length);
continue;
}
file.get_mut().write_u16_le(data.len() as u16).await?;
file.get_mut().write_all(&data).await?;
backend.send(packet).await?;
},
None => {

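A usage sketch of the request/response API that IdmClient now exposes. The `/dev/hvc1` path matches what GuestInit opens elsewhere in this changeset; everything else is illustrative and assumes `anyhow` is available.

```rust
// Sketch only: open the IDM channel and issue a metrics request with a reply.
use krata::idm::client::IdmClient;
use krata::idm::protocol::{idm_request::Request, IdmMetricsRequest};

async fn idm_request_example() -> anyhow::Result<()> {
    let client = IdmClient::open("/dev/hvc1").await?;
    // send() assigns a request id, waits up to the request timeout defined above,
    // and returns the matching response content.
    let response = client.send(Request::Metrics(IdmMetricsRequest {})).await?;
    println!("{:?}", response);
    Ok(())
}
```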
View File

@ -1,3 +1,3 @@
#[cfg(unix)]
pub mod client;
pub mod protocol;
pub use crate::bus::idm as protocol;

View File

@ -1 +0,0 @@
include!(concat!(env!("OUT_DIR"), "/krata.internal.idm.rs"));

View File

@ -2,24 +2,30 @@ use std::collections::HashMap;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum LaunchPackedFormat {
Squashfs,
Erofs,
}
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LaunchNetworkIpv4 {
pub address: String,
pub gateway: String,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LaunchNetworkIpv6 {
pub address: String,
pub gateway: String,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LaunchNetworkResolver {
pub nameservers: Vec<String>,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LaunchNetwork {
pub link: String,
pub ipv4: LaunchNetworkIpv4,
@ -27,8 +33,14 @@ pub struct LaunchNetwork {
pub resolver: LaunchNetworkResolver,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LaunchRoot {
pub format: LaunchPackedFormat,
}
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LaunchInfo {
pub root: LaunchRoot,
pub hostname: Option<String>,
pub network: Option<LaunchNetwork>,
pub env: HashMap<String, String>,

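The new root/format fields are plain serde types, so the packed format travels in the launch config as ordinary JSON. A quick round-trip sketch, with `serde_json` assumed available in the caller's crate:

```rust
// Sketch only: the launch config's new root format field as JSON.
use krata::launchcfg::{LaunchPackedFormat, LaunchRoot};

fn launch_root_example() -> anyhow::Result<()> {
    let root = LaunchRoot { format: LaunchPackedFormat::Erofs };
    // Serializes to {"format":"Erofs"} with the default serde representation.
    let json = serde_json::to_string(&root)?;
    let back: LaunchRoot = serde_json::from_str(&json)?;
    println!("{} -> {:?}", json, back.format);
    Ok(())
}
```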
View File

@ -1,6 +1,7 @@
use once_cell::sync::Lazy;
use prost_reflect::DescriptorPool;
pub mod bus;
pub mod v1;
pub mod client;

View File

@ -1 +1,2 @@
#![allow(clippy::all)]
tonic::include_proto!("krata.v1.common");

View File

@ -1 +1,2 @@
#![allow(clippy::all)]
tonic::include_proto!("krata.v1.control");

View File

@ -16,7 +16,7 @@ clap = { workspace = true }
env_logger = { workspace = true }
etherparse = { workspace = true }
futures = { workspace = true }
krata = { path = "../krata", version = "^0.0.7" }
krata = { path = "../krata", version = "^0.0.9" }
krata-advmac = { workspace = true }
libc = { workspace = true }
log = { workspace = true }

View File

@ -14,11 +14,13 @@ async-compression = { workspace = true, features = ["tokio", "gzip", "zstd"] }
async-trait = { workspace = true }
backhand = { workspace = true }
bytes = { workspace = true }
indexmap = { workspace = true }
krata-tokio-tar = { workspace = true }
log = { workspace = true }
oci-spec = { workspace = true }
path-clean = { workspace = true }
reqwest = { workspace = true }
scopeguard = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sha256 = { workspace = true }

View File

@ -2,8 +2,13 @@ use std::{env::args, path::PathBuf};
use anyhow::Result;
use env_logger::Env;
use krataoci::{cache::ImageCache, compiler::ImageCompiler, name::ImageName};
use tokio::fs;
use krataoci::{
name::ImageName,
packer::{service::OciPackerService, OciPackedFormat},
progress::{OciProgress, OciProgressContext},
registry::OciPlatform,
};
use tokio::{fs, sync::mpsc::channel};
#[tokio::main]
async fn main() -> Result<()> {
@ -17,13 +22,32 @@ async fn main() -> Result<()> {
fs::create_dir(&cache_dir).await?;
}
let cache = ImageCache::new(&cache_dir)?;
let compiler = ImageCompiler::new(&cache, seed)?;
let info = compiler.compile(&image).await?;
let (sender, mut receiver) = channel::<OciProgress>(100);
tokio::task::spawn(async move {
loop {
let mut progresses = Vec::new();
let _ = receiver.recv_many(&mut progresses, 100).await;
let Some(progress) = progresses.last() else {
continue;
};
println!("phase {:?}", progress.phase);
for (id, layer) in &progress.layers {
println!(
"{} {:?} {} of {}",
id, layer.phase, layer.value, layer.total
)
}
}
});
let context = OciProgressContext::new(sender);
let service = OciPackerService::new(seed, &cache_dir, OciPlatform::current())?;
let packed = service
.request(image.clone(), OciPackedFormat::Squashfs, context)
.await?;
println!(
"generated squashfs of {} to {}",
image,
info.image_squashfs.to_string_lossy()
packed.path.to_string_lossy()
);
Ok(())
}

239
crates/oci/src/assemble.rs Normal file
View File

@ -0,0 +1,239 @@
use crate::fetch::{OciImageFetcher, OciImageLayer, OciResolvedImage};
use crate::progress::OciBoundProgress;
use crate::vfs::{VfsNode, VfsTree};
use anyhow::{anyhow, Result};
use log::{debug, trace, warn};
use oci_spec::image::{ImageConfiguration, ImageManifest};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::Arc;
use tokio::fs;
use tokio::io::AsyncRead;
use tokio_stream::StreamExt;
use tokio_tar::{Archive, Entry};
use uuid::Uuid;
pub struct OciImageAssembled {
pub digest: String,
pub manifest: ImageManifest,
pub config: ImageConfiguration,
pub vfs: Arc<VfsTree>,
pub tmp_dir: Option<PathBuf>,
}
impl Drop for OciImageAssembled {
fn drop(&mut self) {
if let Some(tmp) = self.tmp_dir.clone() {
tokio::task::spawn(async move {
let _ = fs::remove_dir_all(&tmp).await;
});
}
}
}
pub struct OciImageAssembler {
downloader: OciImageFetcher,
resolved: OciResolvedImage,
progress: OciBoundProgress,
work_dir: PathBuf,
disk_dir: PathBuf,
tmp_dir: Option<PathBuf>,
}
impl OciImageAssembler {
pub async fn new(
downloader: OciImageFetcher,
resolved: OciResolvedImage,
progress: OciBoundProgress,
work_dir: Option<PathBuf>,
disk_dir: Option<PathBuf>,
) -> Result<OciImageAssembler> {
let tmp_dir = if work_dir.is_none() || disk_dir.is_none() {
let mut tmp_dir = std::env::temp_dir().clone();
tmp_dir.push(format!("oci-assemble-{}", Uuid::new_v4()));
Some(tmp_dir)
} else {
None
};
let work_dir = if let Some(work_dir) = work_dir {
work_dir
} else {
let mut tmp_dir = tmp_dir
.clone()
.ok_or(anyhow!("tmp_dir was not created when expected"))?;
tmp_dir.push("work");
tmp_dir
};
let target_dir = if let Some(target_dir) = disk_dir {
target_dir
} else {
let mut tmp_dir = tmp_dir
.clone()
.ok_or(anyhow!("tmp_dir was not created when expected"))?;
tmp_dir.push("image");
tmp_dir
};
fs::create_dir_all(&work_dir).await?;
fs::create_dir_all(&target_dir).await?;
Ok(OciImageAssembler {
downloader,
resolved,
progress,
work_dir,
disk_dir: target_dir,
tmp_dir,
})
}
pub async fn assemble(self) -> Result<OciImageAssembled> {
debug!("assemble");
let mut layer_dir = self.work_dir.clone();
layer_dir.push("layer");
fs::create_dir_all(&layer_dir).await?;
self.assemble_with(&layer_dir).await
}
async fn assemble_with(self, layer_dir: &Path) -> Result<OciImageAssembled> {
let local = self
.downloader
.download(self.resolved.clone(), layer_dir)
.await?;
let mut vfs = VfsTree::new();
for layer in &local.layers {
debug!(
"process layer digest={} compression={:?}",
&layer.digest, layer.compression,
);
self.progress
.update(|progress| {
progress.extracting_layer(&layer.digest, 0, 1);
})
.await;
debug!("process layer digest={}", &layer.digest,);
let mut archive = layer.archive().await?;
let mut entries = archive.entries()?;
while let Some(entry) = entries.next().await {
let mut entry = entry?;
let path = entry.path()?;
let Some(name) = path.file_name() else {
continue;
};
let Some(name) = name.to_str() else {
continue;
};
if name.starts_with(".wh.") {
self.process_whiteout_entry(&mut vfs, &entry, name, layer)
.await?;
} else {
vfs.insert_tar_entry(&entry)?;
self.process_write_entry(&mut vfs, &mut entry, layer)
.await?;
}
}
self.progress
.update(|progress| {
progress.extracted_layer(&layer.digest);
})
.await;
}
for layer in &local.layers {
if layer.path.exists() {
fs::remove_file(&layer.path).await?;
}
}
Ok(OciImageAssembled {
vfs: Arc::new(vfs),
digest: self.resolved.digest,
manifest: self.resolved.manifest,
config: local.config,
tmp_dir: self.tmp_dir,
})
}
async fn process_whiteout_entry(
&self,
vfs: &mut VfsTree,
entry: &Entry<Archive<Pin<Box<dyn AsyncRead + Send>>>>,
name: &str,
layer: &OciImageLayer,
) -> Result<()> {
let path = entry.path()?;
let mut path = path.to_path_buf();
path.pop();
let opaque = name == ".wh..wh..opq";
if !opaque {
let file = &name[4..];
path.push(file);
}
trace!(
"whiteout entry {:?} layer={} path={:?}",
entry.path()?,
&layer.digest,
path
);
let result = vfs.root.remove(&path);
if let Some((parent, mut removed)) = result {
delete_disk_paths(&removed).await?;
if opaque {
removed.children.clear();
parent.children.push(removed);
}
} else {
warn!(
"whiteout entry layer={} path={:?} did not exist",
&layer.digest, path
);
}
Ok(())
}
async fn process_write_entry(
&self,
vfs: &mut VfsTree,
entry: &mut Entry<Archive<Pin<Box<dyn AsyncRead + Send>>>>,
layer: &OciImageLayer,
) -> Result<()> {
if !entry.header().entry_type().is_file() {
return Ok(());
}
trace!(
"unpack entry layer={} path={:?} type={:?}",
&layer.digest,
entry.path()?,
entry.header().entry_type(),
);
entry.set_preserve_permissions(false);
entry.set_unpack_xattrs(false);
entry.set_preserve_mtime(false);
let path = entry
.unpack_in(&self.disk_dir)
.await?
.ok_or(anyhow!("unpack did not return a path"))?;
vfs.set_disk_path(&entry.path()?, &path)?;
Ok(())
}
}
async fn delete_disk_paths(node: &VfsNode) -> Result<()> {
let mut queue = vec![node];
while !queue.is_empty() {
let node = queue.remove(0);
if let Some(ref disk_path) = node.disk_path {
if !disk_path.exists() {
warn!("disk path {:?} does not exist", disk_path);
}
fs::remove_file(disk_path).await?;
}
let children = node.children.iter().collect::<Vec<_>>();
queue.extend_from_slice(&children);
}
Ok(())
}
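The whiteout handling above follows the OCI layer spec: a ".wh.<name>" entry removes "<name>" inherited from lower layers, while the special ".wh..wh..opq" marker empties the directory that contains it (the directory node itself survives with its children cleared). A minimal standalone sketch of that mapping, illustrative only and not part of this diff; whiteout_target is a hypothetical helper:

use std::path::{Path, PathBuf};

// Returns the path a whiteout entry targets and whether it is opaque.
fn whiteout_target(entry_path: &Path) -> Option<(PathBuf, bool)> {
    let name = entry_path.file_name()?.to_str()?;
    let mut parent = entry_path.to_path_buf();
    parent.pop();
    if name == ".wh..wh..opq" {
        // Opaque whiteout: the directory itself survives, its children do not.
        Some((parent, true))
    } else if let Some(hidden) = name.strip_prefix(".wh.") {
        // Regular whiteout: ".wh.foo" hides "foo" from lower layers.
        Some((parent.join(hidden), false))
    } else {
        None
    }
}

fn main() {
    assert_eq!(
        whiteout_target(Path::new("usr/share/.wh.doc")),
        Some((PathBuf::from("usr/share/doc"), false))
    );
    assert_eq!(
        whiteout_target(Path::new("var/cache/.wh..wh..opq")),
        Some((PathBuf::from("var/cache"), true))
    );
}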


@ -1,71 +0,0 @@
use super::compiler::ImageInfo;
use anyhow::Result;
use log::debug;
use oci_spec::image::{ImageConfiguration, ImageManifest};
use std::path::{Path, PathBuf};
use tokio::fs;
#[derive(Clone)]
pub struct ImageCache {
cache_dir: PathBuf,
}
impl ImageCache {
pub fn new(cache_dir: &Path) -> Result<ImageCache> {
Ok(ImageCache {
cache_dir: cache_dir.to_path_buf(),
})
}
pub async fn recall(&self, digest: &str) -> Result<Option<ImageInfo>> {
let mut squashfs_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
squashfs_path.push(format!("{}.squashfs", digest));
manifest_path.push(format!("{}.manifest.json", digest));
config_path.push(format!("{}.config.json", digest));
Ok(
if squashfs_path.exists() && manifest_path.exists() && config_path.exists() {
let squashfs_metadata = fs::metadata(&squashfs_path).await?;
let manifest_metadata = fs::metadata(&manifest_path).await?;
let config_metadata = fs::metadata(&config_path).await?;
if squashfs_metadata.is_file()
&& manifest_metadata.is_file()
&& config_metadata.is_file()
{
let manifest_text = fs::read_to_string(&manifest_path).await?;
let manifest: ImageManifest = serde_json::from_str(&manifest_text)?;
let config_text = fs::read_to_string(&config_path).await?;
let config: ImageConfiguration = serde_json::from_str(&config_text)?;
debug!("cache hit digest={}", digest);
Some(ImageInfo::new(squashfs_path.clone(), manifest, config)?)
} else {
None
}
} else {
debug!("cache miss digest={}", digest);
None
},
)
}
pub async fn store(&self, digest: &str, info: &ImageInfo) -> Result<ImageInfo> {
debug!("cache store digest={}", digest);
let mut squashfs_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
squashfs_path.push(format!("{}.squashfs", digest));
manifest_path.push(format!("{}.manifest.json", digest));
config_path.push(format!("{}.config.json", digest));
fs::copy(&info.image_squashfs, &squashfs_path).await?;
let manifest_text = serde_json::to_string_pretty(&info.manifest)?;
fs::write(&manifest_path, manifest_text).await?;
let config_text = serde_json::to_string_pretty(&info.config)?;
fs::write(&config_path, config_text).await?;
ImageInfo::new(
squashfs_path.clone(),
info.manifest.clone(),
info.config.clone(),
)
}
}


@ -1,411 +0,0 @@
use crate::cache::ImageCache;
use crate::fetch::{OciImageDownloader, OciImageLayer};
use crate::name::ImageName;
use crate::registry::OciRegistryPlatform;
use anyhow::{anyhow, Result};
use backhand::compression::Compressor;
use backhand::{FilesystemCompressor, FilesystemWriter, NodeHeader};
use log::{debug, trace, warn};
use oci_spec::image::{ImageConfiguration, ImageManifest};
use std::borrow::Cow;
use std::fs::File;
use std::io::{BufWriter, ErrorKind, Read};
use std::os::unix::fs::{FileTypeExt, MetadataExt, PermissionsExt};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use tokio::fs;
use tokio::io::AsyncRead;
use tokio_stream::StreamExt;
use tokio_tar::{Archive, Entry};
use uuid::Uuid;
use walkdir::WalkDir;
pub const IMAGE_SQUASHFS_VERSION: u64 = 2;
pub struct ImageInfo {
pub image_squashfs: PathBuf,
pub manifest: ImageManifest,
pub config: ImageConfiguration,
}
impl ImageInfo {
pub fn new(
squashfs: PathBuf,
manifest: ImageManifest,
config: ImageConfiguration,
) -> Result<ImageInfo> {
Ok(ImageInfo {
image_squashfs: squashfs,
manifest,
config,
})
}
}
pub struct ImageCompiler<'a> {
cache: &'a ImageCache,
seed: Option<PathBuf>,
}
impl ImageCompiler<'_> {
pub fn new(cache: &ImageCache, seed: Option<PathBuf>) -> Result<ImageCompiler> {
Ok(ImageCompiler { cache, seed })
}
pub async fn compile(&self, image: &ImageName) -> Result<ImageInfo> {
debug!("compile image={image}");
let mut tmp_dir = std::env::temp_dir().clone();
tmp_dir.push(format!("krata-compile-{}", Uuid::new_v4()));
let mut image_dir = tmp_dir.clone();
image_dir.push("image");
fs::create_dir_all(&image_dir).await?;
let mut layer_dir = tmp_dir.clone();
layer_dir.push("layer");
fs::create_dir_all(&layer_dir).await?;
let mut squash_file = tmp_dir.clone();
squash_file.push("image.squashfs");
let info = self
.download_and_compile(image, &layer_dir, &image_dir, &squash_file)
.await?;
fs::remove_dir_all(&tmp_dir).await?;
Ok(info)
}
async fn download_and_compile(
&self,
image: &ImageName,
layer_dir: &Path,
image_dir: &Path,
squash_file: &Path,
) -> Result<ImageInfo> {
let downloader = OciImageDownloader::new(
self.seed.clone(),
layer_dir.to_path_buf(),
OciRegistryPlatform::current(),
);
let resolved = downloader.resolve(image.clone()).await?;
let cache_key = format!(
"manifest={}:squashfs-version={}\n",
resolved.digest, IMAGE_SQUASHFS_VERSION
);
let cache_digest = sha256::digest(cache_key);
if let Some(cached) = self.cache.recall(&cache_digest).await? {
return Ok(cached);
}
let local = downloader.download(resolved).await?;
for layer in &local.layers {
debug!(
"process layer digest={} compression={:?}",
&layer.digest, layer.compression,
);
let whiteouts = self.process_layer_whiteout(layer, image_dir).await?;
debug!(
"process layer digest={} whiteouts={:?}",
&layer.digest, whiteouts
);
let mut archive = layer.archive().await?;
let mut entries = archive.entries()?;
while let Some(entry) = entries.next().await {
let mut entry = entry?;
let path = entry.path()?;
let mut maybe_whiteout_path_str =
path.to_str().map(|x| x.to_string()).unwrap_or_default();
if whiteouts.contains(&maybe_whiteout_path_str) {
continue;
}
maybe_whiteout_path_str.push('/');
if whiteouts.contains(&maybe_whiteout_path_str) {
continue;
}
let Some(name) = path.file_name() else {
return Err(anyhow!("unable to get file name"));
};
let Some(name) = name.to_str() else {
return Err(anyhow!("unable to get file name as string"));
};
if name.starts_with(".wh.") {
continue;
} else {
self.process_write_entry(&mut entry, layer, image_dir)
.await?;
}
}
}
for layer in &local.layers {
if layer.path.exists() {
fs::remove_file(&layer.path).await?;
}
}
self.squash(image_dir, squash_file)?;
let info = ImageInfo::new(
squash_file.to_path_buf(),
local.image.manifest,
local.config,
)?;
self.cache.store(&cache_digest, &info).await
}
async fn process_layer_whiteout(
&self,
layer: &OciImageLayer,
image_dir: &Path,
) -> Result<Vec<String>> {
let mut whiteouts = Vec::new();
let mut archive = layer.archive().await?;
let mut entries = archive.entries()?;
while let Some(entry) = entries.next().await {
let entry = entry?;
let path = entry.path()?;
let Some(name) = path.file_name() else {
return Err(anyhow!("unable to get file name"));
};
let Some(name) = name.to_str() else {
return Err(anyhow!("unable to get file name as string"));
};
if name.starts_with(".wh.") {
let path = self
.process_whiteout_entry(&entry, name, layer, image_dir)
.await?;
if let Some(path) = path {
whiteouts.push(path);
}
}
}
Ok(whiteouts)
}
async fn process_whiteout_entry(
&self,
entry: &Entry<Archive<Pin<Box<dyn AsyncRead + Send>>>>,
name: &str,
layer: &OciImageLayer,
image_dir: &Path,
) -> Result<Option<String>> {
let path = entry.path()?;
let mut dst = self.check_safe_entry(path.clone(), image_dir)?;
dst.pop();
let mut path = path.to_path_buf();
path.pop();
let opaque = name == ".wh..wh..opq";
if !opaque {
let file = &name[4..];
dst.push(file);
path.push(file);
self.check_safe_path(&dst, image_dir)?;
}
trace!("whiteout entry layer={} path={:?}", &layer.digest, path,);
let whiteout = path
.to_str()
.ok_or(anyhow!("unable to convert path to string"))?
.to_string();
if opaque {
if dst.is_dir() {
let mut reader = fs::read_dir(dst).await?;
while let Some(entry) = reader.next_entry().await? {
let path = entry.path();
if path.is_symlink() || path.is_file() {
fs::remove_file(&path).await?;
} else if path.is_dir() {
fs::remove_dir_all(&path).await?;
} else {
return Err(anyhow!("opaque whiteout entry did not exist"));
}
}
} else {
debug!(
"whiteout opaque entry missing locally layer={} path={:?} local={:?}",
&layer.digest,
entry.path()?,
dst,
);
}
} else if dst.is_file() || dst.is_symlink() {
fs::remove_file(&dst).await?;
} else if dst.is_dir() {
fs::remove_dir_all(&dst).await?;
} else {
debug!(
"whiteout entry missing locally layer={} path={:?} local={:?}",
&layer.digest,
entry.path()?,
dst,
);
}
Ok(if opaque { None } else { Some(whiteout) })
}
async fn process_write_entry(
&self,
entry: &mut Entry<Archive<Pin<Box<dyn AsyncRead + Send>>>>,
layer: &OciImageLayer,
image_dir: &Path,
) -> Result<()> {
let uid = entry.header().uid()?;
let gid = entry.header().gid()?;
trace!(
"unpack entry layer={} path={:?} type={:?} uid={} gid={}",
&layer.digest,
entry.path()?,
entry.header().entry_type(),
uid,
gid,
);
entry.set_preserve_mtime(true);
entry.set_preserve_permissions(true);
entry.set_unpack_xattrs(true);
if let Some(path) = entry.unpack_in(image_dir).await? {
if !path.is_symlink() {
std::os::unix::fs::chown(path, Some(uid as u32), Some(gid as u32))?;
}
}
Ok(())
}
fn check_safe_entry(&self, path: Cow<Path>, image_dir: &Path) -> Result<PathBuf> {
let mut dst = image_dir.to_path_buf();
dst.push(path);
if let Some(name) = dst.file_name() {
if let Some(name) = name.to_str() {
if name.starts_with(".wh.") {
let copy = dst.clone();
dst.pop();
self.check_safe_path(&dst, image_dir)?;
return Ok(copy);
}
}
}
self.check_safe_path(&dst, image_dir)?;
Ok(dst)
}
fn check_safe_path(&self, dst: &Path, image_dir: &Path) -> Result<()> {
let resolved = path_clean::clean(dst);
if !resolved.starts_with(image_dir) {
return Err(anyhow!("layer attempts to work outside image dir"));
}
Ok(())
}
fn squash(&self, image_dir: &Path, squash_file: &Path) -> Result<()> {
let mut writer = FilesystemWriter::default();
writer.set_compressor(FilesystemCompressor::new(Compressor::Gzip, None)?);
let walk = WalkDir::new(image_dir).follow_links(false);
for entry in walk {
let entry = entry?;
let rel = entry
.path()
.strip_prefix(image_dir)?
.to_str()
.ok_or_else(|| anyhow!("failed to strip prefix of tmpdir"))?;
let rel = format!("/{}", rel);
trace!("squash write {}", rel);
let typ = entry.file_type();
let metadata = std::fs::symlink_metadata(entry.path())?;
let uid = metadata.uid();
let gid = metadata.gid();
let mode = metadata.permissions().mode();
let mtime = metadata.mtime();
if rel == "/" {
writer.set_root_uid(uid);
writer.set_root_gid(gid);
writer.set_root_mode(mode as u16);
continue;
}
let header = NodeHeader {
permissions: mode as u16,
uid,
gid,
mtime: mtime as u32,
};
if typ.is_symlink() {
let symlink = std::fs::read_link(entry.path())?;
let symlink = symlink
.to_str()
.ok_or_else(|| anyhow!("failed to read symlink"))?;
writer.push_symlink(symlink, rel, header)?;
} else if typ.is_dir() {
writer.push_dir(rel, header)?;
} else if typ.is_file() {
writer.push_file(ConsumingFileReader::new(entry.path()), rel, header)?;
} else if typ.is_block_device() {
let device = metadata.dev();
writer.push_block_device(device as u32, rel, header)?;
} else if typ.is_char_device() {
let device = metadata.dev();
writer.push_char_device(device as u32, rel, header)?;
} else if typ.is_fifo() {
writer.push_fifo(rel, header)?;
} else if typ.is_socket() {
writer.push_socket(rel, header)?;
} else {
return Err(anyhow!("invalid file type"));
}
}
let squash_file_path = squash_file
.to_str()
.ok_or_else(|| anyhow!("failed to convert squashfs string"))?;
let file = File::create(squash_file)?;
let mut bufwrite = BufWriter::new(file);
trace!("squash generate: {}", squash_file_path);
writer.write(&mut bufwrite)?;
std::fs::remove_dir_all(image_dir)?;
Ok(())
}
}
struct ConsumingFileReader {
path: PathBuf,
file: Option<File>,
}
impl ConsumingFileReader {
fn new(path: &Path) -> ConsumingFileReader {
ConsumingFileReader {
path: path.to_path_buf(),
file: None,
}
}
}
impl Read for ConsumingFileReader {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
if self.file.is_none() {
self.file = Some(File::open(&self.path)?);
}
let Some(ref mut file) = self.file else {
return Err(std::io::Error::new(
ErrorKind::NotFound,
"file was not opened",
));
};
file.read(buf)
}
}
impl Drop for ConsumingFileReader {
fn drop(&mut self) {
let file = self.file.take();
drop(file);
if let Err(error) = std::fs::remove_file(&self.path) {
warn!("failed to delete consuming file {:?}: {}", self.path, error);
}
}
}


@ -1,6 +1,8 @@
use crate::progress::{OciBoundProgress, OciProgressPhase};
use super::{
name::ImageName,
registry::{OciRegistryClient, OciRegistryPlatform},
registry::{OciPlatform, OciRegistryClient},
};
use std::{
@ -22,10 +24,10 @@ use tokio::{
use tokio_stream::StreamExt;
use tokio_tar::Archive;
pub struct OciImageDownloader {
pub struct OciImageFetcher {
seed: Option<PathBuf>,
storage: PathBuf,
platform: OciRegistryPlatform,
platform: OciPlatform,
progress: OciBoundProgress,
}
#[derive(Clone, Debug, PartialEq, Eq)]
@ -74,16 +76,16 @@ pub struct OciLocalImage {
pub layers: Vec<OciImageLayer>,
}
impl OciImageDownloader {
impl OciImageFetcher {
pub fn new(
seed: Option<PathBuf>,
storage: PathBuf,
platform: OciRegistryPlatform,
) -> OciImageDownloader {
OciImageDownloader {
platform: OciPlatform,
progress: OciBoundProgress,
) -> OciImageFetcher {
OciImageFetcher {
seed,
storage,
platform,
progress,
}
}
@ -208,9 +210,17 @@ impl OciImageDownloader {
})
}
pub async fn download(&self, image: OciResolvedImage) -> Result<OciLocalImage> {
pub async fn download(
&self,
image: OciResolvedImage,
layer_dir: &Path,
) -> Result<OciLocalImage> {
let config: ImageConfiguration;
self.progress
.update(|progress| {
progress.phase = OciProgressPhase::ConfigAcquire;
})
.await;
let mut client = OciRegistryClient::new(image.name.registry_url()?, self.platform.clone())?;
if let Some(seeded) = self
.load_seed_json_blob::<ImageConfiguration>(image.manifest.config())
@ -223,9 +233,31 @@ impl OciImageDownloader {
.await?;
config = serde_json::from_slice(&config_bytes)?;
}
self.progress
.update(|progress| {
progress.phase = OciProgressPhase::LayerAcquire;
for layer in image.manifest.layers() {
progress.add_layer(layer.digest(), layer.size() as usize);
}
})
.await;
let mut layers = Vec::new();
for layer in image.manifest.layers() {
layers.push(self.acquire_layer(&image.name, layer, &mut client).await?);
self.progress
.update(|progress| {
progress.downloading_layer(layer.digest(), 0, layer.size() as usize);
})
.await;
layers.push(
self.acquire_layer(&image.name, layer, layer_dir, &mut client)
.await?,
);
self.progress
.update(|progress| {
progress.downloaded_layer(layer.digest());
})
.await;
}
Ok(OciLocalImage {
image,
@ -238,6 +270,7 @@ impl OciImageDownloader {
&self,
image: &ImageName,
layer: &Descriptor,
layer_dir: &Path,
client: &mut OciRegistryClient,
) -> Result<OciImageLayer> {
debug!(
@ -245,13 +278,15 @@ impl OciImageDownloader {
layer.digest(),
layer.size()
);
let mut layer_path = self.storage.clone();
let mut layer_path = layer_dir.to_path_buf();
layer_path.push(format!("{}.layer", layer.digest()));
let seeded = self.extract_seed_blob(layer, &layer_path).await?;
if !seeded {
let file = File::create(&layer_path).await?;
let size = client.write_blob_to_file(&image.name, layer, file).await?;
let size = client
.write_blob_to_file(&image.name, layer, file, Some(self.progress.clone()))
.await?;
if layer.size() as u64 != size {
return Err(anyhow!(
"downloaded layer size differs from size in manifest",


@ -1,5 +1,7 @@
pub mod cache;
pub mod compiler;
pub mod assemble;
pub mod fetch;
pub mod name;
pub mod packer;
pub mod progress;
pub mod registry;
pub mod vfs;


@ -0,0 +1,201 @@
use std::{path::Path, process::Stdio, sync::Arc};
use super::OciPackedFormat;
use crate::{
progress::{OciBoundProgress, OciProgressPhase},
vfs::VfsTree,
};
use anyhow::{anyhow, Result};
use log::warn;
use tokio::{pin, process::Command, select};
#[derive(Debug, Clone, Copy)]
pub enum OciPackerBackendType {
MkSquashfs,
MkfsErofs,
}
impl OciPackerBackendType {
pub fn format(&self) -> OciPackedFormat {
match self {
OciPackerBackendType::MkSquashfs => OciPackedFormat::Squashfs,
OciPackerBackendType::MkfsErofs => OciPackedFormat::Erofs,
}
}
pub fn create(&self) -> Box<dyn OciPackerBackend> {
match self {
OciPackerBackendType::MkSquashfs => {
Box::new(OciPackerMkSquashfs {}) as Box<dyn OciPackerBackend>
}
OciPackerBackendType::MkfsErofs => {
Box::new(OciPackerMkfsErofs {}) as Box<dyn OciPackerBackend>
}
}
}
}
#[async_trait::async_trait]
pub trait OciPackerBackend: Send + Sync {
async fn pack(&self, progress: OciBoundProgress, vfs: Arc<VfsTree>, file: &Path) -> Result<()>;
}
pub struct OciPackerMkSquashfs {}
#[async_trait::async_trait]
impl OciPackerBackend for OciPackerMkSquashfs {
async fn pack(&self, progress: OciBoundProgress, vfs: Arc<VfsTree>, file: &Path) -> Result<()> {
progress
.update(|progress| {
progress.phase = OciProgressPhase::Packing;
progress.total = 1;
progress.value = 0;
})
.await;
let mut child = Command::new("mksquashfs")
.arg("-")
.arg(file)
.arg("-comp")
.arg("gzip")
.arg("-tar")
.stdin(Stdio::piped())
.stderr(Stdio::null())
.stdout(Stdio::null())
.spawn()?;
let stdin = child
.stdin
.take()
.ok_or(anyhow!("unable to acquire stdin stream"))?;
let mut writer = Some(tokio::task::spawn(async move {
if let Err(error) = vfs.write_to_tar(stdin).await {
warn!("failed to write tar: {}", error);
return Err(error);
}
Ok(())
}));
let wait = child.wait();
pin!(wait);
let status_result = loop {
if let Some(inner) = writer.as_mut() {
select! {
x = inner => {
writer = None;
match x {
Ok(_) => {},
Err(error) => {
return Err(error.into());
}
}
},
status = &mut wait => {
break status;
}
};
} else {
select! {
status = &mut wait => {
break status;
}
};
}
};
if let Some(writer) = writer {
writer.await??;
}
let status = status_result?;
if !status.success() {
Err(anyhow!(
"mksquashfs failed with exit code: {}",
status.code().unwrap()
))
} else {
progress
.update(|progress| {
progress.phase = OciProgressPhase::Packing;
progress.total = 1;
progress.value = 1;
})
.await;
Ok(())
}
}
}
pub struct OciPackerMkfsErofs {}
#[async_trait::async_trait]
impl OciPackerBackend for OciPackerMkfsErofs {
async fn pack(&self, progress: OciBoundProgress, vfs: Arc<VfsTree>, path: &Path) -> Result<()> {
progress
.update(|progress| {
progress.phase = OciProgressPhase::Packing;
progress.total = 1;
progress.value = 0;
})
.await;
let mut child = Command::new("mkfs.erofs")
.arg("-L")
.arg("root")
.arg("--tar=-")
.arg(path)
.stdin(Stdio::piped())
.stderr(Stdio::null())
.stdout(Stdio::null())
.spawn()?;
let stdin = child
.stdin
.take()
.ok_or(anyhow!("unable to acquire stdin stream"))?;
let mut writer = Some(tokio::task::spawn(
async move { vfs.write_to_tar(stdin).await },
));
let wait = child.wait();
pin!(wait);
let status_result = loop {
if let Some(inner) = writer.as_mut() {
select! {
x = inner => {
match x {
Ok(_) => {
writer = None;
},
Err(error) => {
return Err(error.into());
}
}
},
status = &mut wait => {
break status;
}
};
} else {
select! {
status = &mut wait => {
break status;
}
};
}
};
if let Some(writer) = writer {
writer.await??;
}
let status = status_result?;
if !status.success() {
Err(anyhow!(
"mkfs.erofs failed with exit code: {}",
status.code().unwrap()
))
} else {
progress
.update(|progress| {
progress.phase = OciProgressPhase::Packing;
progress.total = 1;
progress.value = 1;
})
.await;
Ok(())
}
}
}
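Both backends above pipe the assembled VFS to the external packer as a tar stream on stdin: a spawned task writes the archive while the parent awaits the child's exit status, so no intermediate tar file is staged on disk. A condensed sketch of that handoff pattern, under these assumptions: mksquashfs with tar-from-stdin support is installed, tar_bytes already holds a serialized tar, and pack_tar_to_squashfs is a hypothetical helper, not this crate's API:

use std::process::Stdio;
use tokio::{io::AsyncWriteExt, process::Command};

async fn pack_tar_to_squashfs(tar_bytes: Vec<u8>, out: &str) -> anyhow::Result<()> {
    let mut child = Command::new("mksquashfs")
        .arg("-")
        .arg(out)
        .arg("-comp")
        .arg("gzip")
        .arg("-tar")
        .stdin(Stdio::piped())
        .spawn()?;
    let mut stdin = child.stdin.take().expect("stdin was requested above");
    // Feed the archive from a separate task so the pipe can drain while we wait.
    let writer = tokio::spawn(async move {
        stdin.write_all(&tar_bytes).await?;
        stdin.shutdown().await?; // close stdin so mksquashfs sees EOF
        Ok::<_, std::io::Error>(())
    });
    let status = child.wait().await?;
    writer.await??;
    anyhow::ensure!(status.success(), "mksquashfs failed with {}", status);
    Ok(())
}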


@ -0,0 +1,84 @@
use crate::packer::{OciImagePacked, OciPackedFormat};
use anyhow::Result;
use log::debug;
use oci_spec::image::{ImageConfiguration, ImageManifest};
use std::path::{Path, PathBuf};
use tokio::fs;
#[derive(Clone)]
pub struct OciPackerCache {
cache_dir: PathBuf,
}
impl OciPackerCache {
pub fn new(cache_dir: &Path) -> Result<OciPackerCache> {
Ok(OciPackerCache {
cache_dir: cache_dir.to_path_buf(),
})
}
pub async fn recall(
&self,
digest: &str,
format: OciPackedFormat,
) -> Result<Option<OciImagePacked>> {
let mut fs_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
fs_path.push(format!("{}.{}", digest, format.extension()));
manifest_path.push(format!("{}.manifest.json", digest));
config_path.push(format!("{}.config.json", digest));
Ok(
if fs_path.exists() && manifest_path.exists() && config_path.exists() {
let image_metadata = fs::metadata(&fs_path).await?;
let manifest_metadata = fs::metadata(&manifest_path).await?;
let config_metadata = fs::metadata(&config_path).await?;
if image_metadata.is_file()
&& manifest_metadata.is_file()
&& config_metadata.is_file()
{
let manifest_text = fs::read_to_string(&manifest_path).await?;
let manifest: ImageManifest = serde_json::from_str(&manifest_text)?;
let config_text = fs::read_to_string(&config_path).await?;
let config: ImageConfiguration = serde_json::from_str(&config_text)?;
debug!("cache hit digest={}", digest);
Some(OciImagePacked::new(
digest.to_string(),
fs_path.clone(),
format,
config,
manifest,
))
} else {
None
}
} else {
debug!("cache miss digest={}", digest);
None
},
)
}
pub async fn store(&self, packed: OciImagePacked) -> Result<OciImagePacked> {
debug!("cache store digest={}", packed.digest);
let mut fs_path = self.cache_dir.clone();
let mut manifest_path = self.cache_dir.clone();
let mut config_path = self.cache_dir.clone();
fs_path.push(format!("{}.{}", packed.digest, packed.format.extension()));
manifest_path.push(format!("{}.manifest.json", packed.digest));
config_path.push(format!("{}.config.json", packed.digest));
fs::copy(&packed.path, &fs_path).await?;
let manifest_text = serde_json::to_string_pretty(&packed.manifest)?;
fs::write(&manifest_path, manifest_text).await?;
let config_text = serde_json::to_string_pretty(&packed.config)?;
fs::write(&config_path, config_text).await?;
Ok(OciImagePacked::new(
packed.digest,
fs_path.clone(),
packed.format,
packed.config,
packed.manifest,
))
}
}
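For reference, the cache above keys everything by digest and stores three sibling files per image: the packed filesystem (named after the format's extension) plus the manifest and config JSON. A small sketch of that layout, illustrative only; cache_paths is a hypothetical helper:

use std::path::PathBuf;

fn cache_paths(cache_dir: &str, digest: &str, extension: &str) -> [PathBuf; 3] {
    let dir = PathBuf::from(cache_dir);
    [
        dir.join(format!("{}.{}", digest, extension)), // packed image filesystem
        dir.join(format!("{}.manifest.json", digest)), // OCI image manifest
        dir.join(format!("{}.config.json", digest)),   // OCI image configuration
    ]
}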


@ -0,0 +1,58 @@
use std::path::PathBuf;
use self::backend::OciPackerBackendType;
use oci_spec::image::{ImageConfiguration, ImageManifest};
pub mod backend;
pub mod cache;
pub mod service;
#[derive(Debug, Default, Clone, Copy)]
pub enum OciPackedFormat {
#[default]
Squashfs,
Erofs,
}
impl OciPackedFormat {
pub fn extension(&self) -> &str {
match self {
OciPackedFormat::Squashfs => "squashfs",
OciPackedFormat::Erofs => "erofs",
}
}
pub fn backend(&self) -> OciPackerBackendType {
match self {
OciPackedFormat::Squashfs => OciPackerBackendType::MkSquashfs,
OciPackedFormat::Erofs => OciPackerBackendType::MkfsErofs,
}
}
}
#[derive(Clone)]
pub struct OciImagePacked {
pub digest: String,
pub path: PathBuf,
pub format: OciPackedFormat,
pub config: ImageConfiguration,
pub manifest: ImageManifest,
}
impl OciImagePacked {
pub fn new(
digest: String,
path: PathBuf,
format: OciPackedFormat,
config: ImageConfiguration,
manifest: ImageManifest,
) -> OciImagePacked {
OciImagePacked {
digest,
path,
format,
config,
manifest,
}
}
}
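A quick illustration (not part of the diff, assuming the krataoci crate as a dependency) of how the format enum drives everything downstream: it names the cache file extension and selects the packer backend.

use krataoci::packer::{backend::OciPackerBackendType, OciPackedFormat};

fn main() {
    let format = OciPackedFormat::Erofs;
    assert_eq!(format.extension(), "erofs");
    assert!(matches!(format.backend(), OciPackerBackendType::MkfsErofs));
}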


@ -0,0 +1,81 @@
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Result};
use crate::{
assemble::OciImageAssembler,
fetch::OciImageFetcher,
name::ImageName,
progress::{OciBoundProgress, OciProgress, OciProgressContext},
registry::OciPlatform,
};
use super::{cache::OciPackerCache, OciImagePacked, OciPackedFormat};
#[derive(Clone)]
pub struct OciPackerService {
seed: Option<PathBuf>,
platform: OciPlatform,
cache: OciPackerCache,
}
impl OciPackerService {
pub fn new(
seed: Option<PathBuf>,
cache_dir: &Path,
platform: OciPlatform,
) -> Result<OciPackerService> {
Ok(OciPackerService {
seed,
cache: OciPackerCache::new(cache_dir)?,
platform,
})
}
pub async fn recall(
&self,
digest: &str,
format: OciPackedFormat,
) -> Result<Option<OciImagePacked>> {
self.cache.recall(digest, format).await
}
pub async fn request(
&self,
name: ImageName,
format: OciPackedFormat,
progress_context: OciProgressContext,
) -> Result<OciImagePacked> {
let progress = OciProgress::new();
let progress = OciBoundProgress::new(progress_context.clone(), progress);
let fetcher =
OciImageFetcher::new(self.seed.clone(), self.platform.clone(), progress.clone());
let resolved = fetcher.resolve(name).await?;
if let Some(cached) = self.cache.recall(&resolved.digest, format).await? {
return Ok(cached);
}
let assembler =
OciImageAssembler::new(fetcher, resolved, progress.clone(), None, None).await?;
let assembled = assembler.assemble().await?;
let mut file = assembled
.tmp_dir
.clone()
.ok_or(anyhow!("tmp_dir was missing when packing image"))?;
file.push("image.pack");
let target = file.clone();
let packer = format.backend().create();
packer
.pack(progress, assembled.vfs.clone(), &target)
.await?;
let packed = OciImagePacked::new(
assembled.digest.clone(),
file,
format,
assembled.config.clone(),
assembled.manifest.clone(),
);
let packed = self.cache.store(packed).await?;
Ok(packed)
}
}

crates/oci/src/progress.rs

@ -0,0 +1,140 @@
use std::sync::Arc;
use indexmap::IndexMap;
use tokio::sync::{mpsc::Sender, Mutex};
#[derive(Clone, Debug)]
pub struct OciProgress {
pub phase: OciProgressPhase,
pub layers: IndexMap<String, OciProgressLayer>,
pub value: u64,
pub total: u64,
}
impl Default for OciProgress {
fn default() -> Self {
Self::new()
}
}
impl OciProgress {
pub fn new() -> Self {
OciProgress {
phase: OciProgressPhase::Resolving,
layers: IndexMap::new(),
value: 0,
total: 1,
}
}
pub fn add_layer(&mut self, id: &str, size: usize) {
self.layers.insert(
id.to_string(),
OciProgressLayer {
id: id.to_string(),
phase: OciProgressLayerPhase::Waiting,
value: 0,
total: size as u64,
},
);
}
pub fn downloading_layer(&mut self, id: &str, downloaded: usize, total: usize) {
if let Some(entry) = self.layers.get_mut(id) {
entry.phase = OciProgressLayerPhase::Downloading;
entry.value = downloaded as u64;
entry.total = total as u64;
}
}
pub fn downloaded_layer(&mut self, id: &str) {
if let Some(entry) = self.layers.get_mut(id) {
entry.phase = OciProgressLayerPhase::Downloaded;
entry.value = entry.total;
}
}
pub fn extracting_layer(&mut self, id: &str, extracted: usize, total: usize) {
if let Some(entry) = self.layers.get_mut(id) {
entry.phase = OciProgressLayerPhase::Extracting;
entry.value = extracted as u64;
entry.total = total as u64;
}
}
pub fn extracted_layer(&mut self, id: &str) {
if let Some(entry) = self.layers.get_mut(id) {
entry.phase = OciProgressLayerPhase::Extracted;
entry.value = entry.total;
}
}
}
#[derive(Clone, Debug)]
pub enum OciProgressPhase {
Resolving,
Resolved,
ConfigAcquire,
LayerAcquire,
Packing,
Complete,
}
#[derive(Clone, Debug)]
pub struct OciProgressLayer {
pub id: String,
pub phase: OciProgressLayerPhase,
pub value: u64,
pub total: u64,
}
#[derive(Clone, Debug)]
pub enum OciProgressLayerPhase {
Waiting,
Downloading,
Downloaded,
Extracting,
Extracted,
}
#[derive(Clone)]
pub struct OciProgressContext {
sender: Sender<OciProgress>,
}
impl OciProgressContext {
pub fn new(sender: Sender<OciProgress>) -> OciProgressContext {
OciProgressContext { sender }
}
pub fn update(&self, progress: &OciProgress) {
let _ = self.sender.try_send(progress.clone());
}
}
#[derive(Clone)]
pub struct OciBoundProgress {
context: OciProgressContext,
instance: Arc<Mutex<OciProgress>>,
}
impl OciBoundProgress {
pub fn new(context: OciProgressContext, progress: OciProgress) -> OciBoundProgress {
OciBoundProgress {
context,
instance: Arc::new(Mutex::new(progress)),
}
}
pub async fn update(&self, function: impl FnOnce(&mut OciProgress)) {
let mut progress = self.instance.lock().await;
function(&mut progress);
self.context.update(&progress);
}
pub fn update_blocking(&self, function: impl FnOnce(&mut OciProgress)) {
let mut progress = self.instance.blocking_lock();
function(&mut progress);
self.context.update(&progress);
}
}
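A short sketch (not part of the diff, assuming the krataoci and tokio crates as dependencies) of how the two halves fit together: OciBoundProgress mutates a shared snapshot and forwards a clone through OciProgressContext, and because the context uses try_send a slow consumer simply misses intermediate snapshots instead of blocking the packer.

use krataoci::progress::{OciBoundProgress, OciProgress, OciProgressContext, OciProgressPhase};
use tokio::sync::mpsc::channel;

#[tokio::main]
async fn main() {
    let (sender, mut receiver) = channel::<OciProgress>(100);
    let bound = OciBoundProgress::new(OciProgressContext::new(sender), OciProgress::new());
    bound
        .update(|progress| {
            progress.phase = OciProgressPhase::LayerAcquire;
            progress.add_layer("sha256:example", 4096);
        })
        .await;
    if let Some(snapshot) = receiver.recv().await {
        println!("phase {:?} with {} layer(s)", snapshot.phase, snapshot.layers.len());
    }
}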


@ -7,26 +7,28 @@ use reqwest::{Client, RequestBuilder, Response, StatusCode};
use tokio::{fs::File, io::AsyncWriteExt};
use url::Url;
use crate::progress::OciBoundProgress;
#[derive(Clone, Debug)]
pub struct OciRegistryPlatform {
pub struct OciPlatform {
pub os: Os,
pub arch: Arch,
}
impl OciRegistryPlatform {
impl OciPlatform {
#[cfg(target_arch = "x86_64")]
const CURRENT_ARCH: Arch = Arch::Amd64;
#[cfg(target_arch = "aarch64")]
const CURRENT_ARCH: Arch = Arch::ARM64;
pub fn new(os: Os, arch: Arch) -> OciRegistryPlatform {
OciRegistryPlatform { os, arch }
pub fn new(os: Os, arch: Arch) -> OciPlatform {
OciPlatform { os, arch }
}
pub fn current() -> OciRegistryPlatform {
OciRegistryPlatform {
pub fn current() -> OciPlatform {
OciPlatform {
os: Os::Linux,
arch: OciRegistryPlatform::CURRENT_ARCH,
arch: OciPlatform::CURRENT_ARCH,
}
}
}
@ -34,12 +36,12 @@ impl OciRegistryPlatform {
pub struct OciRegistryClient {
agent: Client,
url: Url,
platform: OciRegistryPlatform,
platform: OciPlatform,
token: Option<String>,
}
impl OciRegistryClient {
pub fn new(url: Url, platform: OciRegistryPlatform) -> Result<OciRegistryClient> {
pub fn new(url: Url, platform: OciPlatform) -> Result<OciRegistryClient> {
Ok(OciRegistryClient {
agent: Client::new(),
url,
@ -138,6 +140,7 @@ impl OciRegistryClient {
name: N,
descriptor: &Descriptor,
mut dest: File,
progress: Option<OciBoundProgress>,
) -> Result<u64> {
let url = self.url.join(&format!(
"/v2/{}/blobs/{}",
@ -146,9 +149,25 @@ impl OciRegistryClient {
))?;
let mut response = self.call(self.agent.get(url.as_str())).await?;
let mut size: u64 = 0;
let mut last_progress_size: u64 = 0;
while let Some(chunk) = response.chunk().await? {
dest.write_all(&chunk).await?;
size += chunk.len() as u64;
if (size - last_progress_size) > (5 * 1024 * 1024) {
last_progress_size = size;
if let Some(ref progress) = progress {
progress
.update(|progress| {
progress.downloading_layer(
descriptor.digest(),
size as usize,
descriptor.size() as usize,
);
})
.await;
}
}
}
Ok(size)
}
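The download loop above throttles progress reporting: an update is emitted only after another 5 MiB has arrived since the last report, so large layers do not flood the progress channel with per-chunk messages. The check in isolation, illustrative only; should_report is a hypothetical helper:

const REPORT_STEP: u64 = 5 * 1024 * 1024; // 5 MiB between progress updates

fn should_report(size: u64, last_reported: &mut u64) -> bool {
    if size - *last_reported > REPORT_STEP {
        *last_reported = size;
        return true;
    }
    false
}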

crates/oci/src/vfs.rs

@ -0,0 +1,261 @@
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Result};
use tokio::{
fs::File,
io::{AsyncRead, AsyncWrite, AsyncWriteExt},
};
use tokio_tar::{Builder, Entry, EntryType, Header};
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum VfsNodeType {
Directory,
RegularFile,
Symlink,
Hardlink,
Fifo,
CharDevice,
BlockDevice,
}
#[derive(Clone, Debug)]
pub struct VfsNode {
pub name: String,
pub size: u64,
pub children: Vec<VfsNode>,
pub typ: VfsNodeType,
pub uid: u64,
pub gid: u64,
pub link_name: Option<String>,
pub mode: u32,
pub mtime: u64,
pub dev_major: Option<u32>,
pub dev_minor: Option<u32>,
pub disk_path: Option<PathBuf>,
}
impl VfsNode {
pub fn from<X: AsyncRead + Unpin>(entry: &Entry<X>) -> Result<VfsNode> {
let header = entry.header();
let name = entry
.path()?
.file_name()
.ok_or(anyhow!("unable to get file name for entry"))?
.to_string_lossy()
.to_string();
let typ = header.entry_type();
let vtype = if typ.is_symlink() {
VfsNodeType::Symlink
} else if typ.is_hard_link() {
VfsNodeType::Hardlink
} else if typ.is_dir() {
VfsNodeType::Directory
} else if typ.is_fifo() {
VfsNodeType::Fifo
} else if typ.is_block_special() {
VfsNodeType::BlockDevice
} else if typ.is_character_special() {
VfsNodeType::CharDevice
} else if typ.is_file() {
VfsNodeType::RegularFile
} else {
return Err(anyhow!("unable to determine vfs type for entry"));
};
Ok(VfsNode {
name,
size: header.size()?,
children: vec![],
typ: vtype,
uid: header.uid()?,
gid: header.gid()?,
link_name: header.link_name()?.map(|x| x.to_string_lossy().to_string()),
mode: header.mode()?,
mtime: header.mtime()?,
dev_major: header.device_major()?,
dev_minor: header.device_minor()?,
disk_path: None,
})
}
pub fn lookup(&self, path: &Path) -> Option<&VfsNode> {
let mut node = self;
for part in path {
node = node
.children
.iter()
.find(|child| child.name == part.to_string_lossy())?;
}
Some(node)
}
pub fn lookup_mut(&mut self, path: &Path) -> Option<&mut VfsNode> {
let mut node = self;
for part in path {
node = node
.children
.iter_mut()
.find(|child| child.name == part.to_string_lossy())?;
}
Some(node)
}
pub fn remove(&mut self, path: &Path) -> Option<(&mut VfsNode, VfsNode)> {
let parent = path.parent()?;
let node = self.lookup_mut(parent)?;
let file_name = path.file_name()?;
let file_name = file_name.to_string_lossy();
let position = node
.children
.iter()
.position(|child| file_name == child.name)?;
let removed = node.children.remove(position);
Some((node, removed))
}
pub fn create_tar_header(&self) -> Result<Header> {
let mut header = Header::new_ustar();
header.set_entry_type(match self.typ {
VfsNodeType::Directory => EntryType::Directory,
VfsNodeType::CharDevice => EntryType::Char,
VfsNodeType::BlockDevice => EntryType::Block,
VfsNodeType::Fifo => EntryType::Fifo,
VfsNodeType::Hardlink => EntryType::Link,
VfsNodeType::Symlink => EntryType::Symlink,
VfsNodeType::RegularFile => EntryType::Regular,
});
header.set_uid(self.uid);
header.set_gid(self.gid);
if let Some(device_major) = self.dev_major {
header.set_device_major(device_major)?;
}
if let Some(device_minor) = self.dev_minor {
header.set_device_minor(device_minor)?;
}
header.set_mtime(self.mtime);
header.set_mode(self.mode);
if let Some(link_name) = self.link_name.as_ref() {
header.set_link_name(&PathBuf::from(link_name))?;
}
header.set_size(self.size);
Ok(header)
}
pub async fn write_to_tar<W: AsyncWrite + Unpin + Send>(
&self,
path: &Path,
builder: &mut Builder<W>,
) -> Result<()> {
let mut header = self.create_tar_header()?;
header.set_path(path)?;
header.set_cksum();
if let Some(disk_path) = self.disk_path.as_ref() {
builder
.append(&header, File::open(disk_path).await?)
.await?;
} else {
builder.append(&header, &[] as &[u8]).await?;
}
Ok(())
}
}
#[derive(Clone, Debug)]
pub struct VfsTree {
pub root: VfsNode,
}
impl Default for VfsTree {
fn default() -> Self {
Self::new()
}
}
impl VfsTree {
pub fn new() -> VfsTree {
VfsTree {
root: VfsNode {
name: "".to_string(),
size: 0,
children: vec![],
typ: VfsNodeType::Directory,
uid: 0,
gid: 0,
link_name: None,
mode: 0,
mtime: 0,
dev_major: None,
dev_minor: None,
disk_path: None,
},
}
}
pub fn insert_tar_entry<X: AsyncRead + Unpin>(&mut self, entry: &Entry<X>) -> Result<()> {
let mut meta = VfsNode::from(entry)?;
let path = entry.path()?.to_path_buf();
let parent = if let Some(parent) = path.parent() {
self.root.lookup_mut(parent)
} else {
Some(&mut self.root)
};
let Some(parent) = parent else {
return Err(anyhow!("unable to find parent of entry"));
};
let position = parent
.children
.iter()
.position(|child| meta.name == child.name);
if let Some(position) = position {
let old = parent.children.remove(position);
if meta.typ == VfsNodeType::Directory {
meta.children = old.children;
}
}
parent.children.push(meta);
Ok(())
}
pub fn set_disk_path(&mut self, path: &Path, disk_path: &Path) -> Result<()> {
let Some(node) = self.root.lookup_mut(path) else {
return Err(anyhow!(
"unable to find node {:?} to set disk path to",
path
));
};
node.disk_path = Some(disk_path.to_path_buf());
Ok(())
}
pub async fn write_to_tar<W: AsyncWrite + Unpin + Send + 'static>(
&self,
write: W,
) -> Result<()> {
let mut builder = Builder::new(write);
let mut queue = vec![(PathBuf::from(""), &self.root)];
while !queue.is_empty() {
let (mut path, node) = queue.remove(0);
if !node.name.is_empty() {
path.push(&node.name);
}
if path.components().count() != 0 {
node.write_to_tar(&path, &mut builder).await?;
}
for child in &node.children {
queue.push((path.clone(), child));
}
}
let mut write = builder.into_inner().await?;
write.flush().await?;
drop(write);
Ok(())
}
}
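To make the shape of this API concrete, here is a sketch (not part of the diff) that streams an existing tar file through a VfsTree and re-serializes it; it assumes the same tokio-tar fork used above and the krataoci crate as a dependency. Entries keep only metadata here since nothing is unpacked to disk, which is enough to show how insert_tar_entry and write_to_tar compose.

use std::pin::Pin;
use anyhow::Result;
use krataoci::vfs::VfsTree;
use tokio::fs::File;
use tokio::io::AsyncRead;
use tokio_stream::StreamExt;
use tokio_tar::Archive;

async fn rebuild_tar(input: &str, output: &str) -> Result<()> {
    // Box the reader the same way the assembler does so the entry types line up.
    let reader: Pin<Box<dyn AsyncRead + Send>> = Box::pin(File::open(input).await?);
    let mut archive = Archive::new(reader);
    let mut vfs = VfsTree::new();
    let mut entries = archive.entries()?;
    while let Some(entry) = entries.next().await {
        // Record each entry in the in-memory tree; directories merge children.
        vfs.insert_tar_entry(&entry?)?;
    }
    // Walk the tree breadth-first and emit a fresh tar stream.
    vfs.write_to_tar(File::create(output).await?).await
}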


@ -12,18 +12,18 @@ resolver = "2"
anyhow = { workspace = true }
backhand = { workspace = true }
ipnetwork = { workspace = true }
krata = { path = "../krata", version = "^0.0.7" }
krata = { path = "../krata", version = "^0.0.9" }
krata-advmac = { workspace = true }
krata-oci = { path = "../oci", version = "^0.0.7" }
krata-oci = { path = "../oci", version = "^0.0.9" }
log = { workspace = true }
loopdev-3 = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true }
uuid = { workspace = true }
krata-xenclient = { path = "../xen/xenclient", version = "^0.0.7" }
krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.7" }
krata-xengnt = { path = "../xen/xengnt", version = "^0.0.7" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.7" }
krata-xenclient = { path = "../xen/xenclient", version = "^0.0.9" }
krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.9" }
krata-xengnt = { path = "../xen/xengnt", version = "^0.0.9" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.9" }
[lib]
name = "kratart"
@ -31,10 +31,6 @@ name = "kratart"
[dev-dependencies]
env_logger = { workspace = true }
[[example]]
name = "kratart-squashify"
path = "examples/squashify.rs"
[[example]]
name = "kratart-channel"
path = "examples/channel.rs"


@ -1,29 +0,0 @@
use std::{env::args, path::PathBuf};
use anyhow::Result;
use env_logger::Env;
use krataoci::{cache::ImageCache, compiler::ImageCompiler, name::ImageName};
use tokio::fs;
#[tokio::main]
async fn main() -> Result<()> {
env_logger::Builder::from_env(Env::default().default_filter_or("info")).init();
let image = ImageName::parse(&args().nth(1).unwrap())?;
let seed = args().nth(2).map(PathBuf::from);
let cache_dir = PathBuf::from("krata-cache");
if !cache_dir.exists() {
fs::create_dir(&cache_dir).await?;
}
let cache = ImageCache::new(&cache_dir)?;
let compiler = ImageCompiler::new(&cache, seed)?;
let info = compiler.compile(&image).await?;
println!(
"generated squashfs of {} to {}",
image,
info.image_squashfs.to_string_lossy()
);
Ok(())
}


@ -1,7 +1,7 @@
use anyhow::Result;
use backhand::{FilesystemWriter, NodeHeader};
use krata::launchcfg::LaunchInfo;
use krataoci::compiler::ImageInfo;
use krataoci::packer::OciImagePacked;
use log::trace;
use std::fs;
use std::fs::File;
@ -9,28 +9,24 @@ use std::path::PathBuf;
use uuid::Uuid;
pub struct ConfigBlock<'a> {
pub image_info: &'a ImageInfo,
pub image: &'a OciImagePacked,
pub file: PathBuf,
pub dir: PathBuf,
}
impl ConfigBlock<'_> {
pub fn new<'a>(uuid: &Uuid, image_info: &'a ImageInfo) -> Result<ConfigBlock<'a>> {
pub fn new<'a>(uuid: &Uuid, image: &'a OciImagePacked) -> Result<ConfigBlock<'a>> {
let mut dir = std::env::temp_dir().clone();
dir.push(format!("krata-cfg-{}", uuid));
fs::create_dir_all(&dir)?;
let mut file = dir.clone();
file.push("config.squashfs");
Ok(ConfigBlock {
image_info,
file,
dir,
})
Ok(ConfigBlock { image, file, dir })
}
pub fn build(&self, launch_config: &LaunchInfo) -> Result<()> {
trace!("build launch_config={:?}", launch_config);
let manifest = self.image_info.config.to_string()?;
let manifest = self.image.config.to_string()?;
let launch = serde_json::to_string(launch_config)?;
let mut writer = FilesystemWriter::default();
writer.push_dir(


@ -48,7 +48,7 @@ pub struct ChannelService {
gnttab: GrantTab,
input_receiver: Receiver<(u32, Vec<u8>)>,
pub input_sender: Sender<(u32, Vec<u8>)>,
output_sender: Sender<(u32, Vec<u8>)>,
output_sender: Sender<(u32, Option<Vec<u8>>)>,
}
impl ChannelService {
@ -58,7 +58,7 @@ impl ChannelService {
) -> Result<(
ChannelService,
Sender<(u32, Vec<u8>)>,
Receiver<(u32, Vec<u8>)>,
Receiver<(u32, Option<Vec<u8>>)>,
)> {
let (input_sender, input_receiver) = channel(GROUPED_CHANNEL_QUEUE_LEN);
let (output_sender, output_receiver) = channel(GROUPED_CHANNEL_QUEUE_LEN);
@ -203,12 +203,14 @@ pub struct ChannelBackend {
pub domid: u32,
pub id: u32,
pub sender: Sender<Vec<u8>>,
raw_sender: Sender<(u32, Option<Vec<u8>>)>,
task: JoinHandle<()>,
}
impl Drop for ChannelBackend {
fn drop(&mut self) {
self.task.abort();
let _ = self.raw_sender.try_send((self.domid, None));
debug!(
"destroyed channel backend for domain {} channel {}",
self.domid, self.id
@ -226,7 +228,7 @@ impl ChannelBackend {
store: XsdClient,
evtchn: EventChannel,
gnttab: GrantTab,
output_sender: Sender<(u32, Vec<u8>)>,
output_sender: Sender<(u32, Option<Vec<u8>>)>,
use_reserved_ref: Option<u64>,
) -> Result<ChannelBackend> {
let processor = KrataChannelBackendProcessor {
@ -242,11 +244,14 @@ impl ChannelBackend {
let (input_sender, input_receiver) = channel(SINGLE_CHANNEL_QUEUE_LEN);
let task = processor.launch(output_sender, input_receiver).await?;
let task = processor
.launch(output_sender.clone(), input_receiver)
.await?;
Ok(ChannelBackend {
domid,
id,
task,
raw_sender: output_sender,
sender: input_sender,
})
}
@ -304,7 +309,7 @@ impl KrataChannelBackendProcessor {
async fn launch(
&self,
output_sender: Sender<(u32, Vec<u8>)>,
output_sender: Sender<(u32, Option<Vec<u8>>)>,
input_receiver: Receiver<Vec<u8>>,
) -> Result<JoinHandle<()>> {
let owned = self.clone();
@ -321,7 +326,7 @@ impl KrataChannelBackendProcessor {
async fn processor(
&self,
sender: Sender<(u32, Vec<u8>)>,
sender: Sender<(u32, Option<Vec<u8>>)>,
mut receiver: Receiver<Vec<u8>>,
) -> Result<()> {
self.init().await?;
@ -396,7 +401,7 @@ impl KrataChannelBackendProcessor {
unsafe {
let buffer = self.read_output_buffer(channel.local_port, &memory).await?;
if !buffer.is_empty() {
sender.send((self.domid, buffer)).await?;
sender.send((self.domid, Some(buffer))).await?;
}
};
@ -443,6 +448,10 @@ impl KrataChannelBackendProcessor {
error!("channel for domid {} has an invalid input space of {}", self.domid, space);
}
let free = XenConsoleInterface::INPUT_SIZE.wrapping_sub(space);
if free == 0 {
sleep(Duration::from_micros(100)).await;
continue;
}
let want = data.len().min(free);
let buffer = &data[index..want];
for b in buffer {
@ -466,7 +475,7 @@ impl KrataChannelBackendProcessor {
unsafe {
let buffer = self.read_output_buffer(channel.local_port, &memory).await?;
if !buffer.is_empty() {
sender.send((self.domid, buffer)).await?;
sender.send((self.domid, Some(buffer))).await?;
}
};
channel.unmask_sender.send(channel.local_port).await?;


@ -1,18 +0,0 @@
use anyhow::Result;
use tokio::fs::File;
pub struct XenConsole {
pub read_handle: File,
pub write_handle: File,
}
impl XenConsole {
pub async fn new(tty: &str) -> Result<XenConsole> {
let read_handle = File::options().read(true).write(false).open(tty).await?;
let write_handle = File::options().read(false).write(true).open(tty).await?;
Ok(XenConsole {
read_handle,
write_handle,
})
}
}


@ -8,7 +8,9 @@ use anyhow::{anyhow, Result};
use ipnetwork::{IpNetwork, Ipv4Network};
use krata::launchcfg::{
LaunchInfo, LaunchNetwork, LaunchNetworkIpv4, LaunchNetworkIpv6, LaunchNetworkResolver,
LaunchPackedFormat, LaunchRoot,
};
use krataoci::packer::OciImagePacked;
use tokio::sync::Semaphore;
use uuid::Uuid;
use xenclient::{DomainChannel, DomainConfig, DomainDisk, DomainNetworkInterface};
@ -16,23 +18,19 @@ use xenstore::XsdInterface;
use crate::cfgblk::ConfigBlock;
use crate::RuntimeContext;
use krataoci::{
cache::ImageCache,
compiler::{ImageCompiler, ImageInfo},
name::ImageName,
};
use super::{GuestInfo, GuestState};
pub struct GuestLaunchRequest<'a> {
pub struct GuestLaunchRequest {
pub format: LaunchPackedFormat,
pub uuid: Option<Uuid>,
pub name: Option<&'a str>,
pub image: &'a str,
pub name: Option<String>,
pub vcpus: u32,
pub mem: u64,
pub env: HashMap<String, String>,
pub run: Option<Vec<String>>,
pub debug: bool,
pub image: OciImagePacked,
}
pub struct GuestLauncher {
@ -44,15 +42,13 @@ impl GuestLauncher {
Ok(Self { launch_semaphore })
}
pub async fn launch<'r>(
pub async fn launch(
&mut self,
context: &RuntimeContext,
request: GuestLaunchRequest<'r>,
request: GuestLaunchRequest,
) -> Result<GuestInfo> {
let uuid = request.uuid.unwrap_or_else(Uuid::new_v4);
let xen_name = format!("krata-{uuid}");
let image_info = self.compile(request.image, &context.image_cache).await?;
let mut gateway_mac = MacAddr6::random();
gateway_mac.set_local(true);
gateway_mac.set_multicast(false);
@ -69,9 +65,13 @@ impl GuestLauncher {
let ipv6_network_mask: u32 = 10;
let launch_config = LaunchInfo {
root: LaunchRoot {
format: request.format.clone(),
},
hostname: Some(
request
.name
.as_ref()
.map(|x| x.to_string())
.unwrap_or_else(|| format!("krata-{}", uuid)),
),
@ -98,13 +98,14 @@ impl GuestLauncher {
run: request.run,
};
let cfgblk = ConfigBlock::new(&uuid, &image_info)?;
let cfgblk = ConfigBlock::new(&uuid, &request.image)?;
cfgblk.build(&launch_config)?;
let image_squashfs_path = image_info
.image_squashfs
let image_squashfs_path = request
.image
.path
.to_str()
.ok_or_else(|| anyhow!("failed to convert image squashfs path to string"))?;
.ok_or_else(|| anyhow!("failed to convert image path to string"))?;
let cfgblk_dir_path = cfgblk
.dir
@ -140,7 +141,6 @@ impl GuestLauncher {
cfgblk_dir_path,
),
),
("krata/image".to_string(), request.image.to_string()),
(
"krata/network/guest/ipv4".to_string(),
format!("{}/{}", guest_ipv4, ipv4_network_mask),
@ -167,8 +167,8 @@ impl GuestLauncher {
),
];
if let Some(name) = request.name {
extra_keys.push(("krata/name".to_string(), name.to_string()));
if let Some(name) = request.name.as_ref() {
extra_keys.push(("krata/name".to_string(), name.clone()));
}
let config = DomainConfig {
@ -209,10 +209,10 @@ impl GuestLauncher {
};
match context.xen.create(&config).await {
Ok(created) => Ok(GuestInfo {
name: request.name.map(|x| x.to_string()),
name: request.name.as_ref().map(|x| x.to_string()),
uuid,
domid: created.domid,
image: request.image.to_string(),
image: request.image.digest,
loops: vec![],
guest_ipv4: Some(IpNetwork::new(
IpAddr::V4(guest_ipv4),
@ -243,12 +243,6 @@ impl GuestLauncher {
}
}
async fn compile(&self, image: &str, image_cache: &ImageCache) -> Result<ImageInfo> {
let image = ImageName::parse(image)?;
let compiler = ImageCompiler::new(image_cache, None)?;
compiler.compile(&image).await
}
async fn allocate_ipv4(&self, context: &RuntimeContext) -> Result<Ipv4Addr> {
let network = Ipv4Network::new(Ipv4Addr::new(10, 75, 80, 0), 24)?;
let mut used: Vec<Ipv4Addr> = vec![];


@ -15,15 +15,12 @@ use xenstore::{XsdClient, XsdInterface};
use self::{
autoloop::AutoLoop,
console::XenConsole,
launch::{GuestLaunchRequest, GuestLauncher},
};
use krataoci::cache::ImageCache;
pub mod autoloop;
pub mod cfgblk;
pub mod channel;
pub mod console;
pub mod launch;
pub struct GuestLoopInfo {
@ -53,7 +50,6 @@ pub struct GuestInfo {
#[derive(Clone)]
pub struct RuntimeContext {
pub image_cache: ImageCache,
pub autoloop: AutoLoop,
pub xen: XenClient,
pub kernel: String,
@ -69,12 +65,10 @@ impl RuntimeContext {
let xen = XenClient::open(0).await?;
image_cache_path.push("image");
fs::create_dir_all(&image_cache_path)?;
let image_cache = ImageCache::new(&image_cache_path)?;
let kernel = RuntimeContext::detect_guest_file(&store, "kernel")?;
let initrd = RuntimeContext::detect_guest_file(&store, "initrd")?;
Ok(RuntimeContext {
image_cache,
autoloop: AutoLoop::new(LoopControl::open()?),
xen,
kernel,
@ -269,7 +263,7 @@ impl Runtime {
})
}
pub async fn launch<'a>(&self, request: GuestLaunchRequest<'a>) -> Result<GuestInfo> {
pub async fn launch(&self, request: GuestLaunchRequest) -> Result<GuestInfo> {
let mut launcher = GuestLauncher::new(self.launch_semaphore.clone())?;
launcher.launch(&self.context, request).await
}
@ -321,17 +315,6 @@ impl Runtime {
Ok(uuid)
}
pub async fn console(&self, uuid: Uuid) -> Result<XenConsole> {
let info = self
.context
.resolve(uuid)
.await?
.ok_or_else(|| anyhow!("unable to resolve guest: {}", uuid))?;
let domid = info.domid;
let tty = self.context.xen.get_console_path(domid).await?;
XenConsole::new(&tty).await
}
pub async fn list(&self) -> Result<Vec<GuestInfo>> {
self.context.list().await
}


@ -14,8 +14,8 @@ elf = { workspace = true }
flate2 = { workspace = true }
libc = { workspace = true }
log = { workspace = true }
krata-xencall = { path = "../xencall", version = "^0.0.7" }
krata-xenstore = { path = "../xenstore", version = "^0.0.7" }
krata-xencall = { path = "../xencall", version = "^0.0.9" }
krata-xenstore = { path = "../xenstore", version = "^0.0.9" }
memchr = { workspace = true }
nix = { workspace = true }
slice-copy = { workspace = true }


@ -5,11 +5,6 @@ TOOLS_DIR="$(dirname "${0}")"
RUST_TARGET="$("${TOOLS_DIR}/target.sh")"
CROSS_COMPILE="$("${TOOLS_DIR}/cross-compile.sh")"
if [ "${TARGET_LIBC}" = "musl" ] && [ -f "/etc/alpine-release" ]
then
export RUSTFLAGS="-Ctarget-feature=-crt-static"
fi
if [ -z "${CARGO}" ]
then
if [ "${CROSS_COMPILE}" = "1" ] && command -v cross > /dev/null


@ -1,6 +1,17 @@
#!/bin/sh
set -e
if [ -z "${TARGET_LIBC}" ] && [ -e "/etc/alpine-release" ] && [ "${KRATA_TARGET_IGNORE_LIBC}" != "1" ]
then
TARGET_LIBC="musl"
TARGET_VENDOR="alpine"
fi
if [ -z "${TARGET_VENDOR}" ]
then
TARGET_VENDOR="unknown"
fi
if [ -z "${TARGET_LIBC}" ] || [ "${KRATA_TARGET_IGNORE_LIBC}" = "1" ]
then
TARGET_LIBC="gnu"
@ -46,20 +57,20 @@ elif [ "${TARGET_OS}" = "freebsd" ]
then
if [ -z "${RUST_TARGET}" ]
then
[ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-unknown-freebsd"
[ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-${TARGET_VENDOR}-freebsd"
fi
elif [ "${TARGET_OS}" = "netbsd" ]
then
if [ -z "${RUST_TARGET}" ]
then
[ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-unknown-netbsd"
[ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-${TARGET_VENDOR}-netbsd"
fi
else
if [ -z "${RUST_TARGET}" ]
then
[ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-unknown-linux-${TARGET_LIBC}"
[ "${TARGET_ARCH}" = "aarch64" ] && RUST_TARGET="aarch64-unknown-linux-${TARGET_LIBC}"
[ "${TARGET_ARCH}" = "riscv64gc" ] && RUST_TARGET="riscv64gc-unknown-linux-${TARGET_LIBC}"
[ "${TARGET_ARCH}" = "x86_64" ] && RUST_TARGET="x86_64-${TARGET_VENDOR}-linux-${TARGET_LIBC}"
[ "${TARGET_ARCH}" = "aarch64" ] && RUST_TARGET="aarch64-${TARGET_VENDOR}-linux-${TARGET_LIBC}"
[ "${TARGET_ARCH}" = "riscv64gc" ] && RUST_TARGET="riscv64gc-${TARGET_VENDOR}-linux-${TARGET_LIBC}"
fi
fi


@ -43,7 +43,7 @@ do
asset "${SOURCE_FILE_PATH}" "target/assets/krata_${TAG_NAME}_${PLATFORM}.deb"
elif [ "${FORM}" = "alpine" ]
then
asset "${SOURCE_FILE_PATH}" "target/assets/krata_${TAG_NAME}_${PLATFORM}.deb"
asset "${SOURCE_FILE_PATH}" "target/assets/krata_${TAG_NAME}_${PLATFORM}.apk"
elif [ "${FORM}" = "bundle-systemd" ]
then
asset "${SOURCE_FILE_PATH}" "target/assets/krata-systemd_${TAG_NAME}_${PLATFORM}.tgz"


@ -28,5 +28,5 @@ build_and_run() {
fi
RUST_TARGET="$(./hack/build/target.sh)"
./hack/build/cargo.sh build ${CARGO_BUILD_FLAGS} --bin "${EXE_TARGET}"
exec sudo RUST_LOG="${RUST_LOG}" "target/${RUST_TARGET}/debug/${EXE_TARGET}" "${@}"
exec sudo sh -c "RUST_LOG='${RUST_LOG}' 'target/${RUST_TARGET}/debug/${EXE_TARGET}' $*"
}

hack/dist/apk.sh

@ -19,6 +19,8 @@ fpm -s tar -t apk \
--license agpl3 \
--version "${KRATA_VERSION}" \
--architecture "${TARGET_ARCH}" \
--depends "squashfs-tools" \
--depends "erofs-utils" \
--description "Krata Hypervisor" \
--url "https://krata.dev" \
--maintainer "Edera Team <contact@edera.dev>" \

hack/dist/deb.sh

@ -20,6 +20,8 @@ fpm -s tar -t deb \
--version "${KRATA_VERSION}" \
--architecture "${TARGET_ARCH_DEBIAN}" \
--depends "xen-system-${TARGET_ARCH_DEBIAN}" \
--depends "squashfs-tools" \
--depends "erofs-utils" \
--description "Krata Hypervisor" \
--url "https://krata.dev" \
--maintainer "Edera Team <contact@edera.dev>" \


@ -26,7 +26,7 @@ KERNEL_SRC="${KERNEL_DIR}/linux-${KERNEL_VERSION}-${TARGET_ARCH_STANDARD}"
if [ -z "${KRATA_KERNEL_BUILD_JOBS}" ]
then
KRATA_KERNEL_BUILD_JOBS="2"
KRATA_KERNEL_BUILD_JOBS="$(nproc)"
fi
if [ ! -f "${KERNEL_SRC}/Makefile" ]


@ -1,6 +1,6 @@
#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 6.8.2 Kernel Configuration
# Linux/x86 6.7.2 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc (Debian 13.2.0-23) 13.2.0"
CONFIG_CC_IS_GCC=y
@ -15,7 +15,6 @@ CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
CONFIG_GCC_ASM_GOTO_OUTPUT_WORKAROUND=y
CONFIG_TOOLS_SUPPORT_RELR=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
@ -188,10 +187,8 @@ CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_GCC10_NO_ARRAY_BOUNDS=y
CONFIG_GCC11_NO_ARRAY_BOUNDS=y
CONFIG_CC_NO_ARRAY_BOUNDS=y
CONFIG_GCC_NO_STRINGOP_OVERFLOW=y
CONFIG_CC_NO_STRINGOP_OVERFLOW=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
@ -270,19 +267,19 @@ CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_MEMBARRIER=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_PC104 is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_SELFTEST is not set
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
CONFIG_CACHESTAT_SYSCALL=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_GUEST_PERF_EVENTS=y
# CONFIG_PC104 is not set
#
# Kernel Performance Events And Counters
@ -382,7 +379,6 @@ CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_HAVE_PAE=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
@ -460,6 +456,7 @@ CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
# CONFIG_X86_SGX is not set
# CONFIG_X86_USER_SHADOW_STACK is not set
# CONFIG_INTEL_TDX_HOST is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_HANDOVER_PROTOCOL=y
@ -522,7 +519,6 @@ CONFIG_CPU_IBRS_ENTRY=y
CONFIG_CPU_SRSO=y
# CONFIG_SLS is not set
# CONFIG_GDS_FORCE_MITIGATION is not set
CONFIG_MITIGATION_RFDS=y
CONFIG_ARCH_HAS_ADD_PAGES=y
#
@ -547,7 +543,6 @@ CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
CONFIG_ACPI_THERMAL_LIB=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
# CONFIG_ACPI_FPDT is not set
@ -674,13 +669,14 @@ CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
# end of Binary Emulations
CONFIG_HAVE_KVM=y
CONFIG_KVM_COMMON=y
CONFIG_HAVE_KVM_PFNCACHE=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_DIRTY_RING=y
CONFIG_HAVE_KVM_DIRTY_RING_TSO=y
CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
@ -693,16 +689,13 @@ CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_HAVE_KVM_PM_NOTIFIER=y
CONFIG_KVM_GENERIC_HARDWARE_ENABLING=y
CONFIG_KVM_GENERIC_MMU_NOTIFIER=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_WERROR=y
# CONFIG_KVM_SW_PROTECTED_VM is not set
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
CONFIG_KVM_AMD_SEV=y
CONFIG_KVM_SMM=y
CONFIG_KVM_HYPERV=y
# CONFIG_KVM_XEN is not set
# CONFIG_KVM_PROVE_MMU is not set
CONFIG_KVM_MAX_NR_VCPUS=1024
@@ -853,7 +846,6 @@ CONFIG_ARCH_SUPPORTS_PAGE_TABLE_CHECK=y
CONFIG_ARCH_HAS_ELFCORE_COMPAT=y
CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y
CONFIG_DYNAMIC_SIGFRAME=y
CONFIG_ARCH_HAS_HW_PTE_YOUNG=y
CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y
#
@@ -910,7 +902,6 @@ CONFIG_BLK_ICQ=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=m
CONFIG_BLK_DEV_WRITE_MOUNTED=y
# CONFIG_BLK_DEV_ZONED is not set
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
@@ -1000,7 +991,6 @@ CONFIG_SWAP=y
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_DEFAULT_ON is not set
# CONFIG_ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON is not set
# CONFIG_ZSWAP_SHRINKER_DEFAULT_ON is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
@@ -1019,8 +1009,9 @@ CONFIG_ZSMALLOC=y
CONFIG_ZSMALLOC_CHAIN_SIZE=8
#
# Slab allocator options
# SLAB allocator options
#
# CONFIG_SLAB_DEPRECATED is not set
CONFIG_SLUB=y
# CONFIG_SLUB_TINY is not set
CONFIG_SLAB_MERGE_DEFAULT=y
@@ -1029,7 +1020,7 @@ CONFIG_SLAB_FREELIST_RANDOM=y
# CONFIG_SLUB_STATS is not set
CONFIG_SLUB_CPU_PARTIAL=y
# CONFIG_RANDOM_KMALLOC_CACHES is not set
# end of Slab allocator options
# end of SLAB allocator options
# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
# CONFIG_COMPAT_BRK is not set
@@ -1071,7 +1062,6 @@ CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
# CONFIG_TRANSPARENT_HUGEPAGE_NEVER is not set
CONFIG_THP_SWAP=y
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
@@ -1102,7 +1092,6 @@ CONFIG_SECRETMEM=y
CONFIG_LRU_GEN=y
CONFIG_LRU_GEN_ENABLED=y
# CONFIG_LRU_GEN_STATS is not set
CONFIG_LRU_GEN_WALKS_MMU=y
CONFIG_ARCH_SUPPORTS_PER_VMA_LOCK=y
CONFIG_PER_VMA_LOCK=y
CONFIG_LOCK_MM_AND_FIND_VMA=y
@@ -1600,6 +1589,7 @@ CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m
@@ -1935,7 +1925,6 @@ CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_DEVICES=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
@@ -2040,7 +2029,6 @@ CONFIG_ZRAM_DEF_COMP_LZ4=y
# CONFIG_ZRAM_DEF_COMP_LZ4HC is not set
CONFIG_ZRAM_DEF_COMP="lz4"
# CONFIG_ZRAM_WRITEBACK is not set
# CONFIG_ZRAM_TRACK_ENTRY_ACTIME is not set
# CONFIG_ZRAM_MEMORY_TRACKING is not set
# CONFIG_ZRAM_MULTI_COMP is not set
CONFIG_BLK_DEV_LOOP=m
@@ -2101,7 +2089,6 @@ CONFIG_VMWARE_BALLOON=m
# CONFIG_DW_XDATA_PCIE is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
# CONFIG_NSM is not set
# CONFIG_C2PORT is not set
#
@@ -2128,6 +2115,8 @@ CONFIG_VMWARE_BALLOON=m
#
# CONFIG_ALTERA_STAPL is not set
# CONFIG_INTEL_MEI is not set
# CONFIG_INTEL_MEI_ME is not set
# CONFIG_INTEL_MEI_TXE is not set
CONFIG_VMWARE_VMCI=m
# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
@@ -2341,10 +2330,13 @@ CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
# CONFIG_MD_AUTODETECT is not set
CONFIG_MD_BITMAP_FILE=y
# CONFIG_MD_LINEAR is not set
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
# CONFIG_MD_MULTIPATH is not set
# CONFIG_MD_FAULTY is not set
# CONFIG_MD_CLUSTER is not set
CONFIG_BCACHE=m
# CONFIG_BCACHE_DEBUG is not set
@@ -2564,6 +2556,7 @@ CONFIG_NET_VENDOR_WANGXUN=y
# CONFIG_NET_VENDOR_WIZNET is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_EMACLITE is not set
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
@@ -2622,7 +2615,6 @@ CONFIG_FIXED_PHY=m
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
# CONFIG_DP83TD510_PHY is not set
# CONFIG_DP83TG720_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_PSE_CONTROLLER is not set
@@ -3081,13 +3073,13 @@ CONFIG_HWMON=m
# CONFIG_SENSORS_DRIVETEMP is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_DELL_SMM is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_FTSTEUTATES is not set
# CONFIG_SENSORS_GIGABYTE_WATERFORCE is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_G760A is not set
@@ -3219,7 +3211,6 @@ CONFIG_SENSORS_ACPI_POWER=m
CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set
# CONFIG_THERMAL_DEBUGFS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
# CONFIG_THERMAL_WRITABLE_TRIPS is not set
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
@@ -3369,6 +3360,7 @@ CONFIG_BCMA_POSSIBLE=y
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SKY81452 is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_TPS6105X is not set
@@ -3436,7 +3428,6 @@ CONFIG_DRM_GEM_SHMEM_HELPER=m
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
# CONFIG_DRM_I915 is not set
# CONFIG_DRM_XE is not set
# CONFIG_DRM_VGEM is not set
# CONFIG_DRM_VKMS is not set
CONFIG_DRM_VMWGFX=m
@@ -3464,6 +3455,7 @@ CONFIG_DRM_PANEL_BRIDGE=y
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# end of Display Interface Bridges
# CONFIG_DRM_LOONGSON is not set
# CONFIG_DRM_ETNAVIV is not set
CONFIG_DRM_BOCHS=m
CONFIG_DRM_CIRRUS_QEMU=m
@@ -3475,6 +3467,7 @@ CONFIG_DRM_VBOXVIDEO=m
# CONFIG_DRM_GUD is not set
# CONFIG_DRM_SSD130X is not set
CONFIG_DRM_HYPERV=m
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=m
#
@@ -3494,6 +3487,7 @@ CONFIG_FB=m
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
@@ -3528,8 +3522,9 @@ CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYSMEM_FOPS=m
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_IOMEM_FOPS=m
CONFIG_FB_SYSMEM_HELPERS=y
CONFIG_FB_SYSMEM_HELPERS_DEFERRED=y
# CONFIG_FB_MODE_HELPERS is not set
@@ -4132,7 +4127,6 @@ CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_ADMIN_LEGACY=y
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_VDPA=m
# CONFIG_VIRTIO_PMEM is not set
@@ -4312,7 +4306,6 @@ CONFIG_RPMSG_VIRTIO=m
#
# Qualcomm SoC drivers
#
# CONFIG_QCOM_PMIC_PDCHARGER_ULOG is not set
# end of Qualcomm SoC drivers
# CONFIG_SOC_TI is not set
@@ -4387,7 +4380,6 @@ CONFIG_MEMORY=y
#
# Performance monitor support
#
# CONFIG_DWC_PCIE_PMU is not set
# end of Performance monitor support
# CONFIG_RAS is not set
@@ -4408,7 +4400,14 @@ CONFIG_DAX=y
# CONFIG_DEV_DAX is not set
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y
# CONFIG_NVMEM_LAYOUTS is not set
#
# Layout Types
#
# CONFIG_NVMEM_LAYOUT_SL28_VPD is not set
# CONFIG_NVMEM_LAYOUT_ONIE_TLV is not set
# end of Layout Types
# CONFIG_NVMEM_RMEM is not set
#
@@ -4435,7 +4434,6 @@ CONFIG_NVMEM_SYSFS=y
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
CONFIG_FS_STACK=y
CONFIG_BUFFER_HEAD=y
CONFIG_LEGACY_DIRECT_IO=y
CONFIG_EXT2_FS=m
@@ -4527,7 +4525,7 @@ CONFIG_QFMT_V1=m
CONFIG_QFMT_V2=m
CONFIG_QUOTACTL=y
CONFIG_AUTOFS_FS=m
CONFIG_FUSE_FS=m
CONFIG_FUSE_FS=y
# CONFIG_CUSE is not set
CONFIG_VIRTIO_FS=m
CONFIG_OVERLAY_FS=y
@@ -4598,9 +4596,9 @@ CONFIG_TMPFS_XATTR=y
CONFIG_TMPFS_INODE64=y
# CONFIG_TMPFS_QUOTA is not set
CONFIG_HUGETLBFS=y
# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
CONFIG_HUGETLB_PAGE=y
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y
# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
@@ -4656,7 +4654,16 @@ CONFIG_SYSV_FS=m
CONFIG_UFS_FS=m
# CONFIG_UFS_FS_WRITE is not set
# CONFIG_UFS_DEBUG is not set
# CONFIG_EROFS_FS is not set
CONFIG_EROFS_FS=y
# CONFIG_EROFS_FS_DEBUG is not set
CONFIG_EROFS_FS_XATTR=y
CONFIG_EROFS_FS_POSIX_ACL=y
CONFIG_EROFS_FS_SECURITY=y
CONFIG_EROFS_FS_ZIP=y
CONFIG_EROFS_FS_ZIP_LZMA=y
CONFIG_EROFS_FS_ZIP_DEFLATE=y
CONFIG_EROFS_FS_PCPU_KTHREAD=y
CONFIG_EROFS_FS_PCPU_KTHREAD_HIPRI=y
CONFIG_VBOXSF_FS=m
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=m
@@ -4688,7 +4695,6 @@ CONFIG_NFSD_SCSILAYOUT=y
CONFIG_NFSD_FLEXFILELAYOUT=y
# CONFIG_NFSD_V4_2_INTER_SSC is not set
# CONFIG_NFSD_V4_SECURITY_LABEL is not set
# CONFIG_NFSD_LEGACY_CLIENT_TRACKING is not set
CONFIG_GRACE_PERIOD=m
CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
@@ -4948,12 +4954,14 @@ CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_ADIANTUM=m
CONFIG_CRYPTO_CHACHA20=m
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CFB is not set
CONFIG_CRYPTO_CTR=m
CONFIG_CRYPTO_CTS=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_HCTR2=m
CONFIG_CRYPTO_KEYWRAP=m
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_OFB is not set
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XCTR=m
CONFIG_CRYPTO_XTS=y
@@ -5002,7 +5010,7 @@ CONFIG_CRYPTO_XXHASH=m
#
# CRCs (cyclic redundancy checks)
#
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRC64_ROCKSOFT=m
@@ -5108,7 +5116,6 @@ CONFIG_CRYPTO_DEV_QAT=m
# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set
# CONFIG_CRYPTO_DEV_QAT_C62X is not set
CONFIG_CRYPTO_DEV_QAT_4XXX=m
# CONFIG_CRYPTO_DEV_QAT_420XX is not set
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set
# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set
@@ -5196,7 +5203,7 @@ CONFIG_CRC32_SLICEBY8=y
CONFIG_CRC64=m
# CONFIG_CRC4 is not set
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_LIBCRC32C=y
CONFIG_CRC8=m
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
@@ -5216,7 +5223,7 @@ CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
# CONFIG_XZ_DEC_MICROLZMA is not set
CONFIG_XZ_DEC_MICROLZMA=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
@@ -5312,7 +5319,7 @@ CONFIG_DEBUG_MISC=y
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_AS_HAS_NON_CONST_ULEB128=y
CONFIG_AS_HAS_NON_CONST_LEB128=y
# CONFIG_DEBUG_INFO_NONE is not set
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
# CONFIG_DEBUG_INFO_DWARF4 is not set