Compare commits


14 Commits

Author SHA1 Message Date
3cb0e214e9 chore: release (#362)
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-08-29 01:50:12 +00:00
0e0c5264eb build(deps): bump the dep-updates group with 2 updates (#367)
Bumps the dep-updates group with 2 updates: [tonic-build](https://github.com/hyperium/tonic) and [tonic](https://github.com/hyperium/tonic).


Updates `tonic-build` from 0.12.1 to 0.12.2
- [Release notes](https://github.com/hyperium/tonic/releases)
- [Changelog](https://github.com/hyperium/tonic/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/tonic/compare/v0.12.1...v0.12.2)

Updates `tonic` from 0.12.1 to 0.12.2
- [Release notes](https://github.com/hyperium/tonic/releases)
- [Changelog](https://github.com/hyperium/tonic/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/tonic/compare/v0.12.1...v0.12.2)

---
updated-dependencies:
- dependency-name: tonic-build
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: tonic
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-27 19:23:26 +00:00
19f35ef20a feature(krata): implement network reservation list (#366) 2024-08-26 19:05:57 +00:00
79e27256e6 build(deps): bump the dep-updates group with 6 updates (#365)
Bumps the dep-updates group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [flate2](https://github.com/rust-lang/flate2-rs) | `1.0.32` | `1.0.33` |
| [ratatui](https://github.com/ratatui/ratatui) | `0.28.0` | `0.28.1` |
| [redb](https://github.com/cberner/redb) | `2.1.1` | `2.1.2` |
| [serde_json](https://github.com/serde-rs/json) | `1.0.125` | `1.0.127` |
| [sysinfo](https://github.com/GuillaumeGomez/sysinfo) | `0.31.2` | `0.31.3` |
| [serde](https://github.com/serde-rs/serde) | `1.0.208` | `1.0.209` |


Updates `flate2` from 1.0.32 to 1.0.33
- [Release notes](https://github.com/rust-lang/flate2-rs/releases)
- [Changelog](https://github.com/rust-lang/flate2-rs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/flate2-rs/compare/1.0.32...1.0.33)

Updates `ratatui` from 0.28.0 to 0.28.1
- [Release notes](https://github.com/ratatui/ratatui/releases)
- [Changelog](https://github.com/ratatui/ratatui/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ratatui/ratatui/compare/v0.28.0...v0.28.1)

Updates `redb` from 2.1.1 to 2.1.2
- [Release notes](https://github.com/cberner/redb/releases)
- [Changelog](https://github.com/cberner/redb/blob/master/CHANGELOG.md)
- [Commits](https://github.com/cberner/redb/compare/v2.1.1...v2.1.2)

Updates `serde_json` from 1.0.125 to 1.0.127
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/1.0.125...1.0.127)

Updates `sysinfo` from 0.31.2 to 0.31.3
- [Changelog](https://github.com/GuillaumeGomez/sysinfo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/GuillaumeGomez/sysinfo/compare/v0.31.2...v0.31.3)

Updates `serde` from 1.0.208 to 1.0.209
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.208...v1.0.209)

---
updated-dependencies:
- dependency-name: flate2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: ratatui
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: redb
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: sysinfo
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: dep-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Zenla <alex@edera.dev>
2024-08-26 08:27:30 +00:00
b6c726e7aa feature(kernel): switch to linux-kernel-oci images (#364) 2024-08-26 06:18:56 +00:00
0d2b7a3ae3 feature(zone-exec): implement terminal resize support (#363) 2024-08-26 04:43:07 +00:00
f1e3d59b6a chore: release (#354)
Co-authored-by: edera-cultivation[bot] <165992271+edera-cultivation[bot]@users.noreply.github.com>
2024-08-26 01:38:30 +00:00
0106b85de9 fix(zone-exec): catch panic errors and show all errors immediately (#359) 2024-08-25 07:16:20 +00:00
96ccbd50bb fix(zone-exec): ensure that the underlying process is killed when rpc is closed (#361) 2024-08-25 07:07:37 +00:00
41aa1aa707 fix(rpc): rename HostStatus to GetHostStatus (#360) 2024-08-25 06:24:46 +00:00
ec74bc8d2b fix(console): don't replay history when attaching to the console (#358) 2024-08-25 03:49:33 +00:00
694de5d1fd chore(control): split out all of the rpc calls into their own files (#357) 2024-08-25 03:03:20 +00:00
f2db826ba6 feature(config): write default config to config.toml on startup (#356) 2024-08-25 00:48:38 +00:00
7f5609a846 feature(ctl): add --format option to host status and improve cpu topology format (#355) 2024-08-23 19:26:23 +00:00
62 changed files with 2018 additions and 1017 deletions


@@ -6,6 +6,30 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.0.20](https://github.com/edera-dev/krata/compare/v0.0.19...v0.0.20) - 2024-08-27
### Added
- *(krata)* implement network reservation list ([#366](https://github.com/edera-dev/krata/pull/366))
- *(zone-exec)* implement terminal resize support ([#363](https://github.com/edera-dev/krata/pull/363))
### Other
- update Cargo.toml dependencies
## [0.0.19](https://github.com/edera-dev/krata/compare/v0.0.18...v0.0.19) - 2024-08-25
### Added
- *(config)* write default config to config.toml on startup ([#356](https://github.com/edera-dev/krata/pull/356))
- *(ctl)* add --format option to host status and improve cpu topology format ([#355](https://github.com/edera-dev/krata/pull/355))
### Fixed
- *(zone-exec)* ensure that the underlying process is killed when rpc is closed ([#361](https://github.com/edera-dev/krata/pull/361))
- *(rpc)* rename HostStatus to GetHostStatus ([#360](https://github.com/edera-dev/krata/pull/360))
- *(console)* don't replay history when attaching to the console ([#358](https://github.com/edera-dev/krata/pull/358))
- *(zone-exec)* catch panic errors and show all errors immediately ([#359](https://github.com/edera-dev/krata/pull/359))
### Other
- *(control)* split out all of the rpc calls into their own files ([#357](https://github.com/edera-dev/krata/pull/357))
## [0.0.18](https://github.com/edera-dev/krata/compare/v0.0.17...v0.0.18) - 2024-08-22
### Added

Cargo.lock generated

@@ -814,9 +814,9 @@ checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80"
[[package]]
name = "flate2"
-version = "1.0.32"
+version = "1.0.33"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9c0596c1eac1f9e04ed902702e9878208b336edc9d6fddc8a48387349bab3666"
+checksum = "324a1be68054ef05ad64b861cc9eaf1d623d2d8cb25b4bf2cb9cdd902b4bf253"
dependencies = [
"crc32fast",
"miniz_oxide 0.8.0",
@@ -1297,7 +1297,7 @@ dependencies = [
[[package]]
name = "krata"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"async-trait",
@@ -1337,7 +1337,7 @@ dependencies = [
[[package]]
name = "krata-buildtools"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"env_logger",
@@ -1352,7 +1352,7 @@ dependencies = [
[[package]]
name = "krata-ctl"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"async-stream",
@@ -1382,7 +1382,7 @@ dependencies = [
[[package]]
name = "krata-daemon"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"async-stream",
@@ -1414,14 +1414,14 @@ dependencies = [
[[package]]
name = "krata-loopdev"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"libc",
]
[[package]]
name = "krata-network"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"async-trait",
@@ -1445,7 +1445,7 @@ dependencies = [
[[package]]
name = "krata-oci"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"async-compression",
@@ -1472,7 +1472,7 @@ dependencies = [
[[package]]
name = "krata-runtime"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"backhand",
@@ -1513,7 +1513,7 @@ dependencies = [
[[package]]
name = "krata-xencall"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"env_logger",
"libc",
@@ -1526,7 +1526,7 @@ dependencies = [
[[package]]
name = "krata-xenclient"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"async-trait",
"env_logger",
@@ -1544,7 +1544,7 @@ dependencies = [
[[package]]
name = "krata-xenevtchn"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"byteorder",
"libc",
@@ -1556,7 +1556,7 @@ dependencies = [
[[package]]
name = "krata-xengnt"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"libc",
"nix 0.29.0",
@@ -1565,7 +1565,7 @@ dependencies = [
[[package]]
name = "krata-xenplatform"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"async-trait",
"c2rust-bitfields",
@@ -1588,7 +1588,7 @@ dependencies = [
[[package]]
name = "krata-xenstore"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"byteorder",
"env_logger",
@@ -1600,7 +1600,7 @@ dependencies = [
[[package]]
name = "krata-zone"
-version = "0.0.18"
+version = "0.0.20"
dependencies = [
"anyhow",
"cgroups-rs",
@@ -1622,6 +1622,7 @@ dependencies = [
"sys-mount",
"sysinfo",
"tokio",
"tokio-util",
]
[[package]]
@@ -2303,9 +2304,9 @@ dependencies = [
[[package]]
name = "ratatui"
-version = "0.28.0"
+version = "0.28.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5ba6a365afbe5615999275bea2446b970b10a41102500e27ce7678d50d978303"
+checksum = "fdef7f9be5c0122f890d58bdf4d964349ba6a6161f705907526d891efabba57d"
dependencies = [
"bitflags 2.6.0",
"cassowary",
@@ -2344,9 +2345,9 @@ dependencies = [
[[package]]
name = "redb"
-version = "2.1.1"
+version = "2.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a6dd20d3cdeb9c7d2366a0b16b93b35b75aec15309fbeb7ce477138c9f68c8c0"
+checksum = "58323dc32ea52a8ae105ff94bc0460c5d906307533ba3401aa63db3cbe491fe5"
dependencies = [
"libc",
]
@@ -2576,9 +2577,9 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "serde"
-version = "1.0.208"
+version = "1.0.209"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cff085d2cb684faa248efb494c39b68e522822ac0de72ccf08109abde717cfb2"
+checksum = "99fce0ffe7310761ca6bf9faf5115afbc19688edd00171d81b1bb1b116c63e09"
dependencies = [
"serde_derive",
]
@@ -2595,9 +2596,9 @@ dependencies = [
[[package]]
name = "serde_derive"
-version = "1.0.208"
+version = "1.0.209"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "24008e81ff7613ed8e5ba0cfaf24e2c2f1e5b8a0495711e44fcd4882fca62bcf"
+checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170"
dependencies = [
"proc-macro2",
"quote",
@@ -2606,9 +2607,9 @@ dependencies = [
[[package]]
name = "serde_json"
-version = "1.0.125"
+version = "1.0.127"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "83c8e735a073ccf5be70aa8066aa984eaf2fa000db6c8d0100ae605b366d31ed"
+checksum = "8043c06d9f82bd7271361ed64f415fe5e12a77fdb52e573e7f06a516dea329ad"
dependencies = [
"itoa",
"memchr",
@@ -2866,9 +2867,9 @@ dependencies = [
[[package]]
name = "sysinfo"
-version = "0.31.2"
+version = "0.31.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d4115055da5f572fff541dd0c4e61b0262977f453cc9fe04be83aba25a89bdab"
+checksum = "2b92e0bdf838cbc1c4c9ba14f9c97a7ec6cdcd1ae66b10e1e42775a25553f45d"
dependencies = [
"core-foundation-sys",
"libc",
@@ -3061,9 +3062,9 @@ dependencies = [
[[package]]
name = "tonic"
-version = "0.12.1"
+version = "0.12.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "38659f4a91aba8598d27821589f5db7dddd94601e7a01b1e485a50e5484c7401"
+checksum = "c6f6ba989e4b2c58ae83d862d3a3e27690b6e3ae630d0deb59f3697f32aa88ad"
dependencies = [
"async-stream",
"async-trait",
@@ -3093,9 +3094,9 @@ dependencies = [
[[package]]
name = "tonic-build"
-version = "0.12.1"
+version = "0.12.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "568392c5a2bd0020723e3f387891176aabafe36fd9fcd074ad309dfa0c8eb964"
+checksum = "fe4ee8877250136bd7e3d2331632810a4df4ea5e004656990d8d66d2f5ee8a67"
dependencies = [
"prettyplease",
"proc-macro2",


@@ -18,7 +18,7 @@ members = [
resolver = "2"
[workspace.package]
-version = "0.0.18"
+version = "0.0.20"
homepage = "https://krata.dev"
license = "Apache-2.0"
repository = "https://github.com/edera-dev/krata"
@@ -70,23 +70,24 @@ prost-reflect-build = "0.14.0"
prost-types = "0.13.1"
pty-process = "0.4.0"
rand = "0.8.5"
-ratatui = "0.28.0"
-redb = "2.1.1"
+ratatui = "0.28.1"
+redb = "2.1.2"
regex = "1.10.6"
rtnetlink = "0.14.1"
scopeguard = "1.2.0"
-serde_json = "1.0.125"
+serde_json = "1.0.127"
serde_yaml = "0.9"
sha256 = "1.5.0"
signal-hook = "0.3.17"
slice-copy = "0.3.0"
smoltcp = "0.11.0"
-sysinfo = "0.31.2"
+sysinfo = "0.31.3"
termtree = "0.5.1"
thiserror = "1.0"
tokio-tun = "0.11.5"
tokio-util = "0.7.11"
toml = "0.8.19"
-tonic-build = "0.12.1"
+tonic-build = "0.12.2"
tower = "0.5.0"
udp-stream = "0.0.12"
url = "2.5.2"
@@ -107,7 +108,7 @@ default-features = false
features = ["rustls-tls"]
[workspace.dependencies.serde]
-version = "1.0.208"
+version = "1.0.209"
features = ["derive"]
[workspace.dependencies.sys-mount]
@@ -123,7 +124,7 @@ version = "0.1"
features = ["io-util", "net"]
[workspace.dependencies.tonic]
-version = "0.12.1"
+version = "0.12.2"
features = ["tls"]
[workspace.dependencies.uuid]


@@ -16,7 +16,7 @@ oci-spec = { workspace = true }
scopeguard = { workspace = true }
tokio = { workspace = true }
tokio-stream = { workspace = true }
-krata-oci = { path = "../oci", version = "^0.0.18" }
+krata-oci = { path = "../oci", version = "^0.0.20" }
krata-tokio-tar = { workspace = true }
uuid = { workspace = true }


@@ -20,7 +20,7 @@ env_logger = { workspace = true }
fancy-duration = { workspace = true }
human_bytes = { workspace = true }
indicatif = { workspace = true }
-krata = { path = "../krata", version = "^0.0.18" }
+krata = { path = "../krata", version = "^0.0.20" }
log = { workspace = true }
prost-reflect = { workspace = true, features = ["serde"] }
prost-types = { workspace = true }


@@ -23,7 +23,7 @@ enum DeviceListFormat {
}
#[derive(Parser)]
-#[command(about = "List the devices on the isolation engine")]
+#[command(about = "List device information")]
pub struct DeviceListCommand {
#[arg(short, long, default_value = "table", help = "Output format")]
format: DeviceListFormat,


@@ -5,7 +5,9 @@ use comfy_table::{Cell, Table};
use krata::v1::control::{
control_service_client::ControlServiceClient, GetHostCpuTopologyRequest, HostCpuTopologyClass,
};
use serde_json::Value;
use crate::format::{kv2line, proto2dynamic, proto2kv};
use tonic::{transport::Channel, Request};
fn class_to_str(input: HostCpuTopologyClass) -> String {
@@ -19,6 +21,11 @@ fn class_to_str(input: HostCpuTopologyClass) -> String {
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum HostCpuTopologyFormat {
Table,
Json,
JsonPretty,
Jsonl,
Yaml,
KeyValue,
}
#[derive(Parser)]
@@ -35,24 +42,61 @@ impl HostCpuTopologyCommand {
.await?
.into_inner();
-let mut table = Table::new();
-table.load_preset(UTF8_FULL_CONDENSED);
-table.set_content_arrangement(comfy_table::ContentArrangement::Dynamic);
-table.set_header(vec!["id", "node", "socket", "core", "thread", "class"]);
-for (i, cpu) in response.cpus.iter().enumerate() {
-table.add_row(vec![
-Cell::new(i),
-Cell::new(cpu.node),
-Cell::new(cpu.socket),
-Cell::new(cpu.core),
-Cell::new(cpu.thread),
-Cell::new(class_to_str(cpu.class())),
-]);
-}
-if !table.is_empty() {
-println!("{}", table);
-}
+match self.format {
+HostCpuTopologyFormat::Table => {
+let mut table = Table::new();
+table.load_preset(UTF8_FULL_CONDENSED);
+table.set_content_arrangement(comfy_table::ContentArrangement::Dynamic);
+table.set_header(vec!["id", "node", "socket", "core", "thread", "class"]);
+for (i, cpu) in response.cpus.iter().enumerate() {
+table.add_row(vec![
+Cell::new(i),
+Cell::new(cpu.node),
+Cell::new(cpu.socket),
+Cell::new(cpu.core),
+Cell::new(cpu.thread),
+Cell::new(class_to_str(cpu.class())),
+]);
+}
+if !table.is_empty() {
+println!("{}", table);
+}
+}
HostCpuTopologyFormat::Json
| HostCpuTopologyFormat::JsonPretty
| HostCpuTopologyFormat::Yaml => {
let mut values = Vec::new();
for cpu in response.cpus {
let message = proto2dynamic(cpu)?;
values.push(serde_json::to_value(message)?);
}
let value = Value::Array(values);
let encoded = if self.format == HostCpuTopologyFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == HostCpuTopologyFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
};
println!("{}", encoded.trim());
}
HostCpuTopologyFormat::Jsonl => {
for cpu in response.cpus {
let message = proto2dynamic(cpu)?;
println!("{}", serde_json::to_string(&message)?);
}
}
HostCpuTopologyFormat::KeyValue => {
for cpu in response.cpus {
let kvs = proto2kv(cpu)?;
println!("{}", kv2line(kvs),);
}
}
}
Ok(())


@@ -1,25 +0,0 @@
use anyhow::Result;
use clap::Parser;
use krata::v1::control::{control_service_client::ControlServiceClient, HostStatusRequest};
use tonic::{transport::Channel, Request};
#[derive(Parser)]
#[command(about = "Get information about the host")]
pub struct HostStatusCommand {}
impl HostStatusCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.host_status(Request::new(HostStatusRequest {}))
.await?
.into_inner();
println!("Host UUID: {}", response.host_uuid);
println!("Host Domain: {}", response.host_domid);
println!("Krata Version: {}", response.krata_version);
println!("Host IPv4: {}", response.host_ipv4);
println!("Host IPv6: {}", response.host_ipv6);
println!("Host Ethernet Address: {}", response.host_mac);
Ok(())
}
}


@@ -7,13 +7,13 @@ use krata::v1::control::control_service_client::ControlServiceClient;
use crate::cli::host::cpu_topology::HostCpuTopologyCommand;
use crate::cli::host::hv_console::HostHvConsoleCommand;
use crate::cli::host::identify::HostStatusCommand;
use crate::cli::host::idm_snoop::HostIdmSnoopCommand;
use crate::cli::host::status::HostStatusCommand;
pub mod cpu_topology;
pub mod hv_console;
pub mod identify;
pub mod idm_snoop;
pub mod status;
#[derive(Parser)]
#[command(about = "Manage the host of the isolation engine")]


@@ -0,0 +1,60 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use krata::v1::control::{control_service_client::ControlServiceClient, GetHostStatusRequest};
use crate::format::{kv2line, proto2dynamic, proto2kv};
use tonic::{transport::Channel, Request};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum HostStatusFormat {
Simple,
Json,
JsonPretty,
Yaml,
KeyValue,
}
#[derive(Parser)]
#[command(about = "Get information about the host")]
pub struct HostStatusCommand {
#[arg(short, long, default_value = "simple", help = "Output format")]
format: HostStatusFormat,
}
impl HostStatusCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let response = client
.get_host_status(Request::new(GetHostStatusRequest {}))
.await?
.into_inner();
match self.format {
HostStatusFormat::Simple => {
println!("Host UUID: {}", response.host_uuid);
println!("Host Domain: {}", response.host_domid);
println!("Krata Version: {}", response.krata_version);
println!("Host IPv4: {}", response.host_ipv4);
println!("Host IPv6: {}", response.host_ipv6);
println!("Host Ethernet Address: {}", response.host_mac);
}
HostStatusFormat::Json | HostStatusFormat::JsonPretty | HostStatusFormat::Yaml => {
let message = proto2dynamic(response)?;
let value = serde_json::to_value(message)?;
let encoded = if self.format == HostStatusFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == HostStatusFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
};
println!("{}", encoded.trim());
}
HostStatusFormat::KeyValue => {
let kvs = proto2kv(response)?;
println!("{}", kv2line(kvs),);
}
}
Ok(())
}
}


@@ -1,6 +1,7 @@
pub mod device;
pub mod host;
pub mod image;
pub mod network;
pub mod zone;
use crate::cli::device::DeviceCommand;
@@ -14,6 +15,7 @@ use krata::{
events::EventStream,
v1::control::{control_service_client::ControlServiceClient, ResolveZoneIdRequest},
};
use network::NetworkCommand;
use tonic::{transport::Channel, Request};
#[derive(Parser)]
@@ -36,6 +38,7 @@ pub struct ControlCommand {
pub enum ControlCommands {
Zone(ZoneCommand),
Image(ImageCommand),
Network(NetworkCommand),
Device(DeviceCommand),
Host(HostCommand),
}
@@ -57,6 +60,8 @@ impl ControlCommands {
match self {
ControlCommands::Zone(zone) => zone.run(client, events).await,
ControlCommands::Network(network) => network.run(client, events).await,
ControlCommands::Image(image) => image.run(client, events).await,
ControlCommands::Device(device) => device.run(client, events).await,


@@ -0,0 +1,43 @@
use anyhow::Result;
use clap::{Parser, Subcommand};
use reservation::NetworkReservationCommand;
use tonic::transport::Channel;
use krata::events::EventStream;
use krata::v1::control::control_service_client::ControlServiceClient;
pub mod reservation;
#[derive(Parser)]
#[command(about = "Manage the network on the isolation engine")]
pub struct NetworkCommand {
#[command(subcommand)]
subcommand: NetworkCommands,
}
impl NetworkCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
self.subcommand.run(client, events).await
}
}
#[derive(Subcommand)]
pub enum NetworkCommands {
Reservation(NetworkReservationCommand),
}
impl NetworkCommands {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
match self {
NetworkCommands::Reservation(reservation) => reservation.run(client, events).await,
}
}
}


@@ -0,0 +1,125 @@
use anyhow::Result;
use clap::{Parser, ValueEnum};
use comfy_table::{presets::UTF8_FULL_CONDENSED, Cell, Table};
use krata::{
events::EventStream,
v1::{
common::NetworkReservation,
control::{control_service_client::ControlServiceClient, ListNetworkReservationsRequest},
},
};
use serde_json::Value;
use tonic::transport::Channel;
use crate::format::{kv2line, proto2dynamic, proto2kv};
#[derive(ValueEnum, Clone, Debug, PartialEq, Eq)]
enum NetworkReservationListFormat {
Table,
Json,
JsonPretty,
Jsonl,
Yaml,
KeyValue,
Simple,
}
#[derive(Parser)]
#[command(about = "List network reservation information")]
pub struct NetworkReservationListCommand {
#[arg(short, long, default_value = "table", help = "Output format")]
format: NetworkReservationListFormat,
}
impl NetworkReservationListCommand {
pub async fn run(
self,
mut client: ControlServiceClient<Channel>,
_events: EventStream,
) -> Result<()> {
let reply = client
.list_network_reservations(ListNetworkReservationsRequest {})
.await?
.into_inner();
let mut reservations = reply.reservations;
reservations.sort_by(|a, b| a.uuid.cmp(&b.uuid));
match self.format {
NetworkReservationListFormat::Table => {
self.print_reservations_table(reservations)?;
}
NetworkReservationListFormat::Simple => {
for reservation in reservations {
println!(
"{}\t{}\t{}\t{}",
reservation.uuid, reservation.ipv4, reservation.ipv6, reservation.mac
);
}
}
NetworkReservationListFormat::Json
| NetworkReservationListFormat::JsonPretty
| NetworkReservationListFormat::Yaml => {
let mut values = Vec::new();
for device in reservations {
let message = proto2dynamic(device)?;
values.push(serde_json::to_value(message)?);
}
let value = Value::Array(values);
let encoded = if self.format == NetworkReservationListFormat::JsonPretty {
serde_json::to_string_pretty(&value)?
} else if self.format == NetworkReservationListFormat::Yaml {
serde_yaml::to_string(&value)?
} else {
serde_json::to_string(&value)?
};
println!("{}", encoded.trim());
}
NetworkReservationListFormat::Jsonl => {
for device in reservations {
let message = proto2dynamic(device)?;
println!("{}", serde_json::to_string(&message)?);
}
}
NetworkReservationListFormat::KeyValue => {
self.print_key_value(reservations)?;
}
}
Ok(())
}
fn print_reservations_table(&self, reservations: Vec<NetworkReservation>) -> Result<()> {
let mut table = Table::new();
table.load_preset(UTF8_FULL_CONDENSED);
table.set_content_arrangement(comfy_table::ContentArrangement::Dynamic);
table.set_header(vec!["uuid", "ipv4", "ipv6", "mac"]);
for reservation in reservations {
table.add_row(vec![
Cell::new(reservation.uuid),
Cell::new(reservation.ipv4),
Cell::new(reservation.ipv6),
Cell::new(reservation.mac),
]);
}
if table.is_empty() {
println!("no network reservations found");
} else {
println!("{}", table);
}
Ok(())
}
fn print_key_value(&self, reservations: Vec<NetworkReservation>) -> Result<()> {
for reservation in reservations {
let kvs = proto2kv(reservation)?;
println!("{}", kv2line(kvs));
}
Ok(())
}
}


@@ -0,0 +1,43 @@
use anyhow::Result;
use clap::{Parser, Subcommand};
use list::NetworkReservationListCommand;
use tonic::transport::Channel;
use krata::events::EventStream;
use krata::v1::control::control_service_client::ControlServiceClient;
pub mod list;
#[derive(Parser)]
#[command(about = "Manage network reservations")]
pub struct NetworkReservationCommand {
#[command(subcommand)]
subcommand: NetworkReservationCommands,
}
impl NetworkReservationCommand {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
self.subcommand.run(client, events).await
}
}
#[derive(Subcommand)]
pub enum NetworkReservationCommands {
List(NetworkReservationListCommand),
}
impl NetworkReservationCommands {
pub async fn run(
self,
client: ControlServiceClient<Channel>,
events: EventStream,
) -> Result<()> {
match self {
NetworkReservationCommands::List(list) => list.run(client, events).await,
}
}
}


@@ -23,7 +23,7 @@ impl ZoneAttachCommand {
events: EventStream,
) -> Result<()> {
let zone_id: String = resolve_zone(&mut client, &self.zone).await?;
-let input = StdioConsoleStream::stdin_stream(zone_id.clone()).await;
+let input = StdioConsoleStream::stdin_stream(zone_id.clone(), false).await;
let output = client.attach_zone_console(input).await?.into_inner();
let stdout_handle =
tokio::task::spawn(async move { StdioConsoleStream::stdout(output, true).await });


@@ -3,11 +3,13 @@ use std::collections::HashMap;
use anyhow::Result;
use clap::Parser;
use crossterm::tty::IsTty;
use krata::v1::{
-common::{ZoneTaskSpec, ZoneTaskSpecEnvVar},
+common::{TerminalSize, ZoneTaskSpec, ZoneTaskSpecEnvVar},
control::{control_service_client::ControlServiceClient, ExecInsideZoneRequest},
};
use tokio::io::stdin;
use tonic::{transport::Channel, Request};
use crate::console::StdioConsoleStream;
@@ -36,6 +38,7 @@ pub struct ZoneExecCommand {
impl ZoneExecCommand {
pub async fn run(self, mut client: ControlServiceClient<Channel>) -> Result<()> {
let zone_id: String = resolve_zone(&mut client, &self.zone).await?;
let should_map_tty = self.tty && stdin().is_tty();
let initial = ExecInsideZoneRequest {
zone_id,
task: Some(ZoneTaskSpec {
@@ -52,16 +55,25 @@
}),
stdin: vec![],
stdin_closed: false,
terminal_size: if should_map_tty {
let size = crossterm::terminal::size().ok();
size.map(|(columns, rows)| TerminalSize {
rows: rows as u32,
columns: columns as u32,
})
} else {
None
},
};
-let stream = StdioConsoleStream::stdin_stream_exec(initial).await;
+let stream = StdioConsoleStream::input_stream_exec(initial, should_map_tty).await;
let response = client
.exec_inside_zone(Request::new(stream))
.await?
.into_inner();
-let code = StdioConsoleStream::exec_output(response, self.tty).await?;
+let code = StdioConsoleStream::exec_output(response, should_map_tty).await?;
std::process::exit(code);
}
}


@@ -187,7 +187,7 @@
}
let code = if self.attach {
-let input = StdioConsoleStream::stdin_stream(id.clone()).await;
+let input = StdioConsoleStream::stdin_stream(id.clone(), true).await;
let output = client.attach_zone_console(input).await?.into_inner();
let stdout_handle =
tokio::task::spawn(async move { StdioConsoleStream::stdout(output, true).await });


@@ -29,7 +29,7 @@ enum ZoneListFormat {
}
#[derive(Parser)]
-#[command(about = "List the zones on the isolation engine")]
+#[command(about = "List zone information")]
pub struct ZoneListCommand {
#[arg(short, long, default_value = "table", help = "Output format")]
format: ZoneListFormat,


@@ -33,7 +33,7 @@ impl ZoneLogsCommand {
let zone_id_stream = zone_id.clone();
let follow = self.follow;
let input = stream! {
-yield ZoneConsoleRequest { zone_id: zone_id_stream, data: Vec::new() };
+yield ZoneConsoleRequest { zone_id: zone_id_stream, replay_history: true, data: Vec::new() };
if follow {
let mut pending = pending::<ZoneConsoleRequest>();
while let Some(x) = pending.next().await {


@@ -1,4 +1,4 @@
-use anyhow::{anyhow, Result};
+use anyhow::Result;
use async_stream::stream;
use crossterm::{
terminal::{disable_raw_mode, enable_raw_mode, is_raw_mode_enabled},
@@ -7,6 +7,7 @@ use crossterm::{
use krata::v1::common::ZoneState;
use krata::{
events::EventStream,
v1::common::TerminalSize,
v1::control::{
watch_events_reply::Event, ExecInsideZoneReply, ExecInsideZoneRequest, ZoneConsoleReply,
ZoneConsoleRequest,
@@ -15,6 +16,7 @@ use krata::{
use log::debug;
use tokio::{
io::{stderr, stdin, stdout, AsyncReadExt, AsyncWriteExt},
select,
task::JoinHandle,
};
use tokio_stream::{Stream, StreamExt};
@@ -22,11 +24,19 @@ use tonic::Streaming;
pub struct StdioConsoleStream;
enum ExecStdinSelect {
DataRead(std::io::Result<usize>),
TerminalResize,
}
impl StdioConsoleStream {
-pub async fn stdin_stream(zone: String) -> impl Stream<Item = ZoneConsoleRequest> {
+pub async fn stdin_stream(
zone: String,
replay_history: bool,
) -> impl Stream<Item = ZoneConsoleRequest> {
let mut stdin = stdin();
stream! {
-yield ZoneConsoleRequest { zone_id: zone, data: vec![] };
+yield ZoneConsoleRequest { zone_id: zone, replay_history, data: vec![] };
let mut buffer = vec![0u8; 60];
loop {
@@ -41,35 +51,111 @@ impl StdioConsoleStream {
if size == 1 && buffer[0] == 0x1d {
break;
}
-yield ZoneConsoleRequest { zone_id: String::default(), data };
+yield ZoneConsoleRequest { zone_id: String::default(), replay_history, data };
}
}
}
-pub async fn stdin_stream_exec(
+#[cfg(unix)]
pub async fn input_stream_exec(
initial: ExecInsideZoneRequest,
tty: bool,
) -> impl Stream<Item = ExecInsideZoneRequest> {
let mut stdin = stdin();
stream! {
yield initial;
let mut buffer = vec![0u8; 60];
let mut terminal_size_change = if tty {
tokio::signal::unix::signal(tokio::signal::unix::SignalKind::window_change()).ok()
} else {
None
};
let mut stdin_closed = false;
loop {
-let size = match stdin.read(&mut buffer).await {
-Ok(size) => size,
-Err(error) => {
-debug!("failed to read stdin: {}", error);
-break;
-}
+let selected = if let Some(ref mut terminal_size_change) = terminal_size_change {
+if stdin_closed {
+select! {
+_ = terminal_size_change.recv() => ExecStdinSelect::TerminalResize,
+}
+} else {
select! {
result = stdin.read(&mut buffer) => ExecStdinSelect::DataRead(result),
_ = terminal_size_change.recv() => ExecStdinSelect::TerminalResize,
}
}
} else {
select! {
result = stdin.read(&mut buffer) => ExecStdinSelect::DataRead(result),
} }
}; };
let stdin = buffer[0..size].to_vec();
-if size == 1 && buffer[0] == 0x1d {
-break;
+match selected {
+ExecStdinSelect::DataRead(result) => {
match result {
Ok(size) => {
let stdin = buffer[0..size].to_vec();
if size == 1 && buffer[0] == 0x1d {
break;
}
stdin_closed = size == 0;
yield ExecInsideZoneRequest { zone_id: String::default(), task: None, terminal_size: None, stdin, stdin_closed, };
},
Err(error) => {
debug!("failed to read stdin: {}", error);
break;
}
}
},
ExecStdinSelect::TerminalResize => {
if let Ok((columns, rows)) = crossterm::terminal::size() {
yield ExecInsideZoneRequest { zone_id: String::default(), task: None, terminal_size: Some(TerminalSize {
rows: rows as u32,
columns: columns as u32,
}), stdin: vec![], stdin_closed: false, };
}
}
} }
-let stdin_closed = size == 0;
-yield ExecInsideZoneRequest { zone_id: String::default(), task: None, stdin, stdin_closed, };
-if stdin_closed {
-break;
+}
+}
+}
#[cfg(not(unix))]
pub async fn input_stream_exec(
initial: ExecInsideZoneRequest,
_tty: bool,
) -> impl Stream<Item = ExecInsideZoneRequest> {
let mut stdin = stdin();
stream! {
yield initial;
let mut buffer = vec![0u8; 60];
let mut stdin_closed = false;
loop {
let selected = select! {
result = stdin.read(&mut buffer) => ExecStdinSelect::DataRead(result),
};
match selected {
ExecStdinSelect::DataRead(result) => {
match result {
Ok(size) => {
let stdin = buffer[0..size].to_vec();
if size == 1 && buffer[0] == 0x1d {
break;
}
stdin_closed = size == 0;
yield ExecInsideZoneRequest { zone_id: String::default(), task: None, terminal_size: None, stdin, stdin_closed, };
},
Err(error) => {
debug!("failed to read stdin: {}", error);
break;
}
}
},
_ => {
continue;
}
}
}
}
@@ -93,7 +179,7 @@ impl StdioConsoleStream {
}
pub async fn exec_output(mut stream: Streaming<ExecInsideZoneReply>, raw: bool) -> Result<i32> {
-if raw && stdin().is_tty() {
+if raw {
enable_raw_mode()?;
StdioConsoleStream::register_terminal_restore_hook()?;
}
@@ -115,7 +201,12 @@ impl StdioConsoleStream {
return if reply.error.is_empty() {
Ok(reply.exit_code)
} else {
-Err(anyhow!("exec failed: {}", reply.error))
+StdioConsoleStream::restore_terminal_mode();
stderr
.write_all(format!("Error: exec failed: {}\n", reply.error).as_bytes())
.await?;
stderr.flush().await?;
Ok(-1)
};
}
}


@@ -19,9 +19,9 @@ clap = { workspace = true }
env_logger = { workspace = true }
futures = { workspace = true }
ipnetwork = { workspace = true }
-krata = { path = "../krata", version = "^0.0.18" }
-krata-oci = { path = "../oci", version = "^0.0.18" }
-krata-runtime = { path = "../runtime", version = "^0.0.18" }
+krata = { path = "../krata", version = "^0.0.20" }
+krata-oci = { path = "../oci", version = "^0.0.20" }
+krata-runtime = { path = "../runtime", version = "^0.0.20" }
log = { workspace = true }
prost = { workspace = true }
redb = { workspace = true }


@@ -112,13 +112,13 @@ fn default_network_ipv6_subnet() -> String {
impl DaemonConfig {
pub async fn load(path: &Path) -> Result<DaemonConfig> {
-if path.exists() {
-let content = fs::read_to_string(path).await?;
-let config: DaemonConfig = toml::from_str(&content)?;
-Ok(config)
+if !path.exists() {
+let config: DaemonConfig = toml::from_str("")?;
+let content = toml::to_string_pretty(&config)?;
+fs::write(&path, content).await?;
} else {
fs::write(&path, "").await?;
Ok(DaemonConfig::default())
}
let content = fs::read_to_string(path).await?;
let config: DaemonConfig = toml::from_str(&content)?;
Ok(config)
}
}


@@ -1,753 +0,0 @@
use crate::db::zone::ZoneStore;
use crate::ip::assignment::IpAssignment;
use crate::{
command::DaemonCommand, console::DaemonConsoleHandle, devices::DaemonDeviceManager,
event::DaemonEventContext, idm::DaemonIdmHandle, metrics::idm_metric_to_api,
oci::convert_oci_progress, zlt::ZoneLookupTable,
};
use async_stream::try_stream;
use futures::Stream;
use krata::v1::common::ZoneResourceStatus;
use krata::v1::control::{
GetZoneReply, GetZoneRequest, SetHostPowerManagementPolicyReply,
SetHostPowerManagementPolicyRequest,
};
use krata::{
idm::internal::{
exec_stream_request_update::Update, request::Request as IdmRequestType,
response::Response as IdmResponseType, ExecEnvVar, ExecStreamRequestStart,
ExecStreamRequestStdin, ExecStreamRequestUpdate, MetricsRequest, Request as IdmRequest,
},
v1::{
common::{OciImageFormat, Zone, ZoneState, ZoneStatus},
control::{
control_service_server::ControlService, CreateZoneReply, CreateZoneRequest,
DestroyZoneReply, DestroyZoneRequest, DeviceInfo, ExecInsideZoneReply,
ExecInsideZoneRequest, GetHostCpuTopologyReply, GetHostCpuTopologyRequest,
HostCpuTopologyInfo, HostStatusReply, HostStatusRequest, ListDevicesReply,
ListDevicesRequest, ListZonesReply, ListZonesRequest, PullImageReply, PullImageRequest,
ReadHypervisorConsoleReply, ReadHypervisorConsoleRequest, ReadZoneMetricsReply,
ReadZoneMetricsRequest, ResolveZoneIdReply, ResolveZoneIdRequest, SnoopIdmReply,
SnoopIdmRequest, UpdateZoneResourcesReply, UpdateZoneResourcesRequest,
WatchEventsReply, WatchEventsRequest, ZoneConsoleReply, ZoneConsoleRequest,
},
},
};
use krataoci::{
name::ImageName,
packer::{service::OciPackerService, OciPackedFormat, OciPackedImage},
progress::{OciProgress, OciProgressContext},
};
use kratart::Runtime;
use std::{pin::Pin, str::FromStr};
use tokio::{
select,
sync::mpsc::{channel, Sender},
task::JoinError,
};
use tokio_stream::StreamExt;
use tonic::{Request, Response, Status, Streaming};
use uuid::Uuid;
pub struct ApiError {
message: String,
}
impl From<anyhow::Error> for ApiError {
fn from(value: anyhow::Error) -> Self {
ApiError {
message: value.to_string(),
}
}
}
impl From<ApiError> for Status {
fn from(value: ApiError) -> Self {
Status::unknown(value.message)
}
}
#[derive(Clone)]
pub struct DaemonControlService {
zlt: ZoneLookupTable,
devices: DaemonDeviceManager,
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
zones: ZoneStore,
ip: IpAssignment,
zone_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
runtime: Runtime,
}
impl DaemonControlService {
#[allow(clippy::too_many_arguments)]
pub fn new(
zlt: ZoneLookupTable,
devices: DaemonDeviceManager,
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
zones: ZoneStore,
ip: IpAssignment,
zone_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
runtime: Runtime,
) -> Self {
Self {
zlt,
devices,
events,
console,
idm,
zones,
ip,
zone_reconciler_notify,
packer,
runtime,
}
}
}
enum ConsoleDataSelect {
Read(Option<Vec<u8>>),
Write(Option<Result<ZoneConsoleRequest, Status>>),
}
enum PullImageSelect {
Progress(Option<OciProgress>),
Completed(Result<Result<OciPackedImage, anyhow::Error>, JoinError>),
}
#[tonic::async_trait]
impl ControlService for DaemonControlService {
type ExecInsideZoneStream =
Pin<Box<dyn Stream<Item = Result<ExecInsideZoneReply, Status>> + Send + 'static>>;
type AttachZoneConsoleStream =
Pin<Box<dyn Stream<Item = Result<ZoneConsoleReply, Status>> + Send + 'static>>;
type PullImageStream =
Pin<Box<dyn Stream<Item = Result<PullImageReply, Status>> + Send + 'static>>;
type WatchEventsStream =
Pin<Box<dyn Stream<Item = Result<WatchEventsReply, Status>> + Send + 'static>>;
type SnoopIdmStream =
Pin<Box<dyn Stream<Item = Result<SnoopIdmReply, Status>> + Send + 'static>>;
async fn host_status(
&self,
request: Request<HostStatusRequest>,
) -> Result<Response<HostStatusReply>, Status> {
let _ = request.into_inner();
let host_reservation =
self.ip
.retrieve(self.zlt.host_uuid())
.await
.map_err(|x| ApiError {
message: x.to_string(),
})?;
Ok(Response::new(HostStatusReply {
host_domid: self.zlt.host_domid(),
host_uuid: self.zlt.host_uuid().to_string(),
krata_version: DaemonCommand::version(),
host_ipv4: host_reservation
.as_ref()
.map(|x| format!("{}/{}", x.ipv4, x.ipv4_prefix))
.unwrap_or_default(),
host_ipv6: host_reservation
.as_ref()
.map(|x| format!("{}/{}", x.ipv6, x.ipv6_prefix))
.unwrap_or_default(),
host_mac: host_reservation
.as_ref()
.map(|x| x.mac.to_string().to_lowercase().replace('-', ":"))
.unwrap_or_default(),
}))
}
async fn create_zone(
&self,
request: Request<CreateZoneRequest>,
) -> Result<Response<CreateZoneReply>, Status> {
let request = request.into_inner();
let Some(spec) = request.spec else {
return Err(ApiError {
message: "zone spec not provided".to_string(),
}
.into());
};
let uuid = Uuid::new_v4();
self.zones
.update(
uuid,
Zone {
id: uuid.to_string(),
status: Some(ZoneStatus {
state: ZoneState::Creating.into(),
network_status: None,
exit_status: None,
error_status: None,
resource_status: None,
host: self.zlt.host_uuid().to_string(),
domid: u32::MAX,
}),
spec: Some(spec),
},
)
.await
.map_err(ApiError::from)?;
self.zone_reconciler_notify
.send(uuid)
.await
.map_err(|x| ApiError {
message: x.to_string(),
})?;
Ok(Response::new(CreateZoneReply {
zone_id: uuid.to_string(),
}))
}
async fn exec_inside_zone(
&self,
request: Request<Streaming<ExecInsideZoneRequest>>,
) -> Result<Response<Self::ExecInsideZoneStream>, Status> {
let mut input = request.into_inner();
let Some(request) = input.next().await else {
return Err(ApiError {
message: "expected to have at least one request".to_string(),
}
.into());
};
let request = request?;
let Some(task) = request.task else {
return Err(ApiError {
message: "task is missing".to_string(),
}
.into());
};
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let idm = self.idm.client(uuid).await.map_err(|error| ApiError {
message: error.to_string(),
})?;
let idm_request = IdmRequest {
request: Some(IdmRequestType::ExecStream(ExecStreamRequestUpdate {
update: Some(Update::Start(ExecStreamRequestStart {
environment: task
.environment
.into_iter()
.map(|x| ExecEnvVar {
key: x.key,
value: x.value,
})
.collect(),
command: task.command,
working_directory: task.working_directory,
tty: task.tty,
})),
})),
};
let output = try_stream! {
let mut handle = idm.send_stream(idm_request).await.map_err(|x| ApiError {
message: x.to_string(),
})?;
loop {
select! {
x = input.next() => if let Some(update) = x {
let update: Result<ExecInsideZoneRequest, Status> = update.map_err(|error| ApiError {
message: error.to_string()
}.into());
if let Ok(update) = update {
if !update.stdin.is_empty() {
let _ = handle.update(IdmRequest {
request: Some(IdmRequestType::ExecStream(ExecStreamRequestUpdate {
update: Some(Update::Stdin(ExecStreamRequestStdin {
data: update.stdin,
closed: update.stdin_closed,
})),
}))}).await;
}
}
},
x = handle.receiver.recv() => match x {
Some(response) => {
let Some(IdmResponseType::ExecStream(update)) = response.response else {
break;
};
let reply = ExecInsideZoneReply {
exited: update.exited,
error: update.error,
exit_code: update.exit_code,
stdout: update.stdout,
stderr: update.stderr,
};
yield reply;
},
None => {
break;
}
}
}
}
};
Ok(Response::new(Box::pin(output) as Self::ExecInsideZoneStream))
}
async fn destroy_zone(
&self,
request: Request<DestroyZoneRequest>,
) -> Result<Response<DestroyZoneReply>, Status> {
let request = request.into_inner();
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let Some(mut zone) = self.zones.read(uuid).await.map_err(ApiError::from)? else {
return Err(ApiError {
message: "zone not found".to_string(),
}
.into());
};
zone.status = Some(zone.status.as_mut().cloned().unwrap_or_default());
if zone.status.as_ref().unwrap().state() == ZoneState::Destroyed {
return Err(ApiError {
message: "zone already destroyed".to_string(),
}
.into());
}
zone.status.as_mut().unwrap().state = ZoneState::Destroying.into();
self.zones
.update(uuid, zone)
.await
.map_err(ApiError::from)?;
self.zone_reconciler_notify
.send(uuid)
.await
.map_err(|x| ApiError {
message: x.to_string(),
})?;
Ok(Response::new(DestroyZoneReply {}))
}
async fn list_zones(
&self,
request: Request<ListZonesRequest>,
) -> Result<Response<ListZonesReply>, Status> {
let _ = request.into_inner();
let zones = self.zones.list().await.map_err(ApiError::from)?;
let zones = zones.into_values().collect::<Vec<Zone>>();
Ok(Response::new(ListZonesReply { zones }))
}
async fn resolve_zone_id(
&self,
request: Request<ResolveZoneIdRequest>,
) -> Result<Response<ResolveZoneIdReply>, Status> {
let request = request.into_inner();
let zones = self.zones.list().await.map_err(ApiError::from)?;
let zones = zones
.into_values()
.filter(|x| {
let comparison_spec = x.spec.as_ref().cloned().unwrap_or_default();
(!request.name.is_empty() && comparison_spec.name == request.name)
|| x.id == request.name
})
.collect::<Vec<Zone>>();
Ok(Response::new(ResolveZoneIdReply {
zone_id: zones.first().cloned().map(|x| x.id).unwrap_or_default(),
}))
}
async fn attach_zone_console(
&self,
request: Request<Streaming<ZoneConsoleRequest>>,
) -> Result<Response<Self::AttachZoneConsoleStream>, Status> {
let mut input = request.into_inner();
let Some(request) = input.next().await else {
return Err(ApiError {
message: "expected to have at least one request".to_string(),
}
.into());
};
let request = request?;
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let (sender, mut receiver) = channel(100);
let console = self
.console
.attach(uuid, sender)
.await
.map_err(|error| ApiError {
message: format!("failed to attach to console: {}", error),
})?;
let output = try_stream! {
yield ZoneConsoleReply { data: console.initial.clone(), };
loop {
let what = select! {
x = receiver.recv() => ConsoleDataSelect::Read(x),
x = input.next() => ConsoleDataSelect::Write(x),
};
match what {
ConsoleDataSelect::Read(Some(data)) => {
yield ZoneConsoleReply { data, };
},
ConsoleDataSelect::Read(None) => {
break;
}
ConsoleDataSelect::Write(Some(request)) => {
let request = request?;
if !request.data.is_empty() {
console.send(request.data).await.map_err(|error| ApiError {
message: error.to_string(),
})?;
}
},
ConsoleDataSelect::Write(None) => {
break;
}
}
}
};
Ok(Response::new(
Box::pin(output) as Self::AttachZoneConsoleStream
))
}
async fn read_zone_metrics(
&self,
request: Request<ReadZoneMetricsRequest>,
) -> Result<Response<ReadZoneMetricsReply>, Status> {
let request = request.into_inner();
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let client = self.idm.client(uuid).await.map_err(|error| ApiError {
message: error.to_string(),
})?;
let response = client
.send(IdmRequest {
request: Some(IdmRequestType::Metrics(MetricsRequest {})),
})
.await
.map_err(|error| ApiError {
message: error.to_string(),
})?;
let mut reply = ReadZoneMetricsReply::default();
if let Some(IdmResponseType::Metrics(metrics)) = response.response {
reply.root = metrics.root.map(idm_metric_to_api);
}
Ok(Response::new(reply))
}
async fn pull_image(
&self,
request: Request<PullImageRequest>,
) -> Result<Response<Self::PullImageStream>, Status> {
let request = request.into_inner();
let name = ImageName::parse(&request.image).map_err(|err| ApiError {
message: err.to_string(),
})?;
let format = match request.format() {
OciImageFormat::Unknown => OciPackedFormat::Squashfs,
OciImageFormat::Squashfs => OciPackedFormat::Squashfs,
OciImageFormat::Erofs => OciPackedFormat::Erofs,
OciImageFormat::Tar => OciPackedFormat::Tar,
};
let (context, mut receiver) = OciProgressContext::create();
let our_packer = self.packer.clone();
let output = try_stream! {
let mut task = tokio::task::spawn(async move {
our_packer.request(name, format, request.overwrite_cache, request.update, context).await
});
let abort_handle = task.abort_handle();
let _task_cancel_guard = scopeguard::guard(abort_handle, |handle| {
handle.abort();
});
loop {
let what = select! {
x = receiver.changed() => match x {
Ok(_) => PullImageSelect::Progress(Some(receiver.borrow_and_update().clone())),
Err(_) => PullImageSelect::Progress(None),
},
x = &mut task => PullImageSelect::Completed(x),
};
match what {
PullImageSelect::Progress(Some(progress)) => {
let reply = PullImageReply {
progress: Some(convert_oci_progress(progress)),
digest: String::new(),
format: OciImageFormat::Unknown.into(),
};
yield reply;
},
PullImageSelect::Completed(result) => {
let result = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let packed = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let reply = PullImageReply {
progress: None,
digest: packed.digest,
format: match packed.format {
OciPackedFormat::Squashfs => OciImageFormat::Squashfs.into(),
OciPackedFormat::Erofs => OciImageFormat::Erofs.into(),
OciPackedFormat::Tar => OciImageFormat::Tar.into(),
},
};
yield reply;
break;
},
_ => {
continue;
}
}
}
};
Ok(Response::new(Box::pin(output) as Self::PullImageStream))
}
async fn watch_events(
&self,
request: Request<WatchEventsRequest>,
) -> Result<Response<Self::WatchEventsStream>, Status> {
let _ = request.into_inner();
let mut events = self.events.subscribe();
let output = try_stream! {
while let Ok(event) = events.recv().await {
yield WatchEventsReply { event: Some(event), };
}
};
Ok(Response::new(Box::pin(output) as Self::WatchEventsStream))
}
async fn snoop_idm(
&self,
request: Request<SnoopIdmRequest>,
) -> Result<Response<Self::SnoopIdmStream>, Status> {
let _ = request.into_inner();
let mut messages = self.idm.snoop();
let zlt = self.zlt.clone();
let output = try_stream! {
while let Ok(event) = messages.recv().await {
let Some(from_uuid) = zlt.lookup_uuid_by_domid(event.from).await else {
continue;
};
let Some(to_uuid) = zlt.lookup_uuid_by_domid(event.to).await else {
continue;
};
yield SnoopIdmReply { from: from_uuid.to_string(), to: to_uuid.to_string(), packet: Some(event.packet) };
}
};
Ok(Response::new(Box::pin(output) as Self::SnoopIdmStream))
}
async fn list_devices(
&self,
request: Request<ListDevicesRequest>,
) -> Result<Response<ListDevicesReply>, Status> {
let _ = request.into_inner();
let mut devices = Vec::new();
let state = self.devices.copy().await.map_err(|error| ApiError {
message: error.to_string(),
})?;
for (name, state) in state {
devices.push(DeviceInfo {
name,
claimed: state.owner.is_some(),
owner: state.owner.map(|x| x.to_string()).unwrap_or_default(),
});
}
Ok(Response::new(ListDevicesReply { devices }))
}
async fn get_host_cpu_topology(
&self,
request: Request<GetHostCpuTopologyRequest>,
) -> Result<Response<GetHostCpuTopologyReply>, Status> {
let _ = request.into_inner();
let power = self
.runtime
.power_management_context()
.await
.map_err(ApiError::from)?;
let cputopo = power.cpu_topology().await.map_err(ApiError::from)?;
let mut cpus = vec![];
for cpu in cputopo {
cpus.push(HostCpuTopologyInfo {
core: cpu.core,
socket: cpu.socket,
node: cpu.node,
thread: cpu.thread,
class: cpu.class as i32,
})
}
Ok(Response::new(GetHostCpuTopologyReply { cpus }))
}
async fn set_host_power_management_policy(
&self,
request: Request<SetHostPowerManagementPolicyRequest>,
) -> Result<Response<SetHostPowerManagementPolicyReply>, Status> {
let policy = request.into_inner();
let power = self
.runtime
.power_management_context()
.await
.map_err(ApiError::from)?;
let scheduler = &policy.scheduler;
power
.set_smt_policy(policy.smt_awareness)
.await
.map_err(ApiError::from)?;
power
.set_scheduler_policy(scheduler)
.await
.map_err(ApiError::from)?;
Ok(Response::new(SetHostPowerManagementPolicyReply {}))
}
async fn get_zone(
&self,
request: Request<GetZoneRequest>,
) -> Result<Response<GetZoneReply>, Status> {
let request = request.into_inner();
let zones = self.zones.list().await.map_err(ApiError::from)?;
let zone = zones.get(&Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?);
Ok(Response::new(GetZoneReply {
zone: zone.cloned(),
}))
}
async fn update_zone_resources(
&self,
request: Request<UpdateZoneResourcesRequest>,
) -> Result<Response<UpdateZoneResourcesReply>, Status> {
let request = request.into_inner();
let uuid = Uuid::from_str(&request.zone_id).map_err(|error| ApiError {
message: error.to_string(),
})?;
let Some(mut zone) = self.zones.read(uuid).await.map_err(ApiError::from)? else {
return Err(ApiError {
message: "zone not found".to_string(),
}
.into());
};
let Some(ref mut status) = zone.status else {
return Err(ApiError {
message: "zone state not available".to_string(),
}
.into());
};
if status.state() != ZoneState::Created {
return Err(ApiError {
message: "zone is in an invalid state".to_string(),
}
.into());
}
if status.domid == 0 || status.domid == u32::MAX {
return Err(ApiError {
message: "zone domid is invalid".to_string(),
}
.into());
}
let mut resources = request.resources.unwrap_or_default();
if resources.target_memory > resources.max_memory {
resources.max_memory = resources.target_memory;
}
if resources.target_cpus < 1 {
resources.target_cpus = 1;
}
let initial_resources = zone
.spec
.clone()
.unwrap_or_default()
.initial_resources
.unwrap_or_default();
if resources.target_cpus > initial_resources.max_cpus {
resources.target_cpus = initial_resources.max_cpus;
}
resources.max_cpus = initial_resources.max_cpus;
self.runtime
.set_memory_resources(
status.domid,
resources.target_memory * 1024 * 1024,
resources.max_memory * 1024 * 1024,
)
.await
.map_err(|error| ApiError {
message: format!("failed to set memory resources: {}", error),
})?;
self.runtime
.set_cpu_resources(status.domid, resources.target_cpus)
.await
.map_err(|error| ApiError {
message: format!("failed to set cpu resources: {}", error),
})?;
status.resource_status = Some(ZoneResourceStatus {
active_resources: Some(resources),
});
self.zones
.update(uuid, zone)
.await
.map_err(ApiError::from)?;
Ok(Response::new(UpdateZoneResourcesReply {}))
}
async fn read_hypervisor_console(
&self,
_request: Request<ReadHypervisorConsoleRequest>,
) -> Result<Response<ReadHypervisorConsoleReply>, Status> {
let data = self
.runtime
.read_hypervisor_console(false)
.await
.map_err(|error| ApiError {
message: error.to_string(),
})?;
Ok(Response::new(ReadHypervisorConsoleReply {
data: data.to_string(),
}))
}
}

View File

@ -0,0 +1,84 @@
use std::pin::Pin;
use std::str::FromStr;
use anyhow::{anyhow, Result};
use async_stream::try_stream;
use tokio::select;
use tokio::sync::mpsc::channel;
use tokio_stream::{Stream, StreamExt};
use tonic::{Status, Streaming};
use uuid::Uuid;
use krata::v1::control::{ZoneConsoleReply, ZoneConsoleRequest};
use crate::console::DaemonConsoleHandle;
use crate::control::ApiError;
enum ConsoleDataSelect {
Read(Option<Vec<u8>>),
Write(Option<Result<ZoneConsoleRequest, Status>>),
}
pub struct AttachZoneConsoleRpc {
console: DaemonConsoleHandle,
}
impl AttachZoneConsoleRpc {
pub fn new(console: DaemonConsoleHandle) -> Self {
Self { console }
}
pub async fn process(
self,
mut input: Streaming<ZoneConsoleRequest>,
) -> Result<Pin<Box<dyn Stream<Item = Result<ZoneConsoleReply, Status>> + Send + 'static>>>
{
let Some(request) = input.next().await else {
return Err(anyhow!("expected to have at least one request"));
};
let request = request?;
let uuid = Uuid::from_str(&request.zone_id)?;
let (sender, mut receiver) = channel(100);
let console = self
.console
.attach(uuid, sender)
.await
.map_err(|error| anyhow!("failed to attach to console: {}", error))?;
let output = try_stream! {
if request.replay_history {
yield ZoneConsoleReply { data: console.initial.clone(), };
}
loop {
let what = select! {
x = receiver.recv() => ConsoleDataSelect::Read(x),
x = input.next() => ConsoleDataSelect::Write(x),
};
match what {
ConsoleDataSelect::Read(Some(data)) => {
yield ZoneConsoleReply { data, };
},
ConsoleDataSelect::Read(None) => {
break;
}
ConsoleDataSelect::Write(Some(request)) => {
let request = request?;
if !request.data.is_empty() {
console.send(request.data).await.map_err(|error| ApiError {
message: error.to_string(),
})?;
}
},
ConsoleDataSelect::Write(None) => {
break;
}
}
}
};
Ok(Box::pin(output))
}
}
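
The ConsoleDataSelect enum above exists so a single select! can multiplex console output and client writes into one match. A self-contained sketch of the same pattern, using hypothetical channel names rather than the daemon's console handle (illustrative only, not part of this change):

use tokio::select;
use tokio::sync::mpsc;

enum Multiplexed {
    FromDevice(Option<Vec<u8>>),
    FromClient(Option<Vec<u8>>),
}

#[tokio::main]
async fn main() {
    let (device_tx, mut device) = mpsc::channel::<Vec<u8>>(16);
    let (client_tx, mut client) = mpsc::channel::<Vec<u8>>(16);
    device_tx.send(b"hello from the zone".to_vec()).await.unwrap();
    // Dropping the senders closes both sides, so recv() eventually yields None.
    drop(device_tx);
    drop(client_tx);
    loop {
        // Whichever source is ready first decides the variant; None means that side closed.
        let what = select! {
            x = device.recv() => Multiplexed::FromDevice(x),
            x = client.recv() => Multiplexed::FromClient(x),
        };
        match what {
            Multiplexed::FromDevice(Some(data)) => println!("read {} bytes", data.len()),
            Multiplexed::FromClient(Some(data)) => println!("write {} bytes", data.len()),
            // Either side closing ends the session, mirroring the Read(None)/Write(None) arms above.
            Multiplexed::FromDevice(None) | Multiplexed::FromClient(None) => break,
        }
    }
}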

View File

@ -0,0 +1,56 @@
use crate::db::zone::ZoneStore;
use crate::zlt::ZoneLookupTable;
use anyhow::{anyhow, Result};
use krata::v1::common::{Zone, ZoneState, ZoneStatus};
use krata::v1::control::{CreateZoneReply, CreateZoneRequest};
use tokio::sync::mpsc::Sender;
use uuid::Uuid;
pub struct CreateZoneRpc {
zones: ZoneStore,
zlt: ZoneLookupTable,
zone_reconciler_notify: Sender<Uuid>,
}
impl CreateZoneRpc {
pub fn new(
zones: ZoneStore,
zlt: ZoneLookupTable,
zone_reconciler_notify: Sender<Uuid>,
) -> Self {
Self {
zones,
zlt,
zone_reconciler_notify,
}
}
pub async fn process(self, request: CreateZoneRequest) -> Result<CreateZoneReply> {
let Some(spec) = request.spec else {
return Err(anyhow!("zone spec not provided"));
};
let uuid = Uuid::new_v4();
self.zones
.update(
uuid,
Zone {
id: uuid.to_string(),
status: Some(ZoneStatus {
state: ZoneState::Creating.into(),
network_status: None,
exit_status: None,
error_status: None,
resource_status: None,
host: self.zlt.host_uuid().to_string(),
domid: u32::MAX,
}),
spec: Some(spec),
},
)
.await?;
self.zone_reconciler_notify.send(uuid).await?;
Ok(CreateZoneReply {
zone_id: uuid.to_string(),
})
}
}

View File

@ -0,0 +1,42 @@
use std::str::FromStr;
use anyhow::{anyhow, Result};
use tokio::sync::mpsc::Sender;
use uuid::Uuid;
use krata::v1::common::ZoneState;
use krata::v1::control::{DestroyZoneReply, DestroyZoneRequest};
use crate::db::zone::ZoneStore;
pub struct DestroyZoneRpc {
zones: ZoneStore,
zone_reconciler_notify: Sender<Uuid>,
}
impl DestroyZoneRpc {
pub fn new(zones: ZoneStore, zone_reconciler_notify: Sender<Uuid>) -> Self {
Self {
zones,
zone_reconciler_notify,
}
}
pub async fn process(self, request: DestroyZoneRequest) -> Result<DestroyZoneReply> {
let uuid = Uuid::from_str(&request.zone_id)?;
let Some(mut zone) = self.zones.read(uuid).await? else {
return Err(anyhow!("zone not found"));
};
zone.status = Some(zone.status.as_mut().cloned().unwrap_or_default());
if zone.status.as_ref().unwrap().state() == ZoneState::Destroyed {
return Err(anyhow!("zone already destroyed"));
}
zone.status.as_mut().unwrap().state = ZoneState::Destroying.into();
self.zones.update(uuid, zone).await?;
self.zone_reconciler_notify.send(uuid).await?;
Ok(DestroyZoneReply {})
}
}

View File

@ -0,0 +1,133 @@
use std::pin::Pin;
use std::str::FromStr;
use anyhow::{anyhow, Result};
use async_stream::try_stream;
use tokio::select;
use tokio_stream::{Stream, StreamExt};
use tonic::{Status, Streaming};
use uuid::Uuid;
use krata::idm::internal::Request;
use krata::{
idm::internal::{
exec_stream_request_update::Update, request::Request as IdmRequestType,
response::Response as IdmResponseType, ExecEnvVar, ExecStreamRequestStart,
ExecStreamRequestStdin, ExecStreamRequestTerminalSize, ExecStreamRequestUpdate,
Request as IdmRequest,
},
v1::control::{ExecInsideZoneReply, ExecInsideZoneRequest},
};
use crate::control::ApiError;
use crate::idm::DaemonIdmHandle;
pub struct ExecInsideZoneRpc {
idm: DaemonIdmHandle,
}
impl ExecInsideZoneRpc {
pub fn new(idm: DaemonIdmHandle) -> Self {
Self { idm }
}
pub async fn process(
self,
mut input: Streaming<ExecInsideZoneRequest>,
) -> Result<Pin<Box<dyn Stream<Item = Result<ExecInsideZoneReply, Status>> + Send + 'static>>>
{
let Some(request) = input.next().await else {
return Err(anyhow!("expected to have at least one request"));
};
let request = request?;
let Some(task) = request.task else {
return Err(anyhow!("task is missing"));
};
let uuid = Uuid::from_str(&request.zone_id)?;
let idm = self.idm.client(uuid).await?;
let idm_request = Request {
request: Some(IdmRequestType::ExecStream(ExecStreamRequestUpdate {
update: Some(Update::Start(ExecStreamRequestStart {
environment: task
.environment
.into_iter()
.map(|x| ExecEnvVar {
key: x.key,
value: x.value,
})
.collect(),
command: task.command,
working_directory: task.working_directory,
tty: task.tty,
terminal_size: request.terminal_size.map(|size| {
ExecStreamRequestTerminalSize {
rows: size.rows,
columns: size.columns,
}
}),
})),
})),
};
let output = try_stream! {
let mut handle = idm.send_stream(idm_request).await.map_err(|x| ApiError {
message: x.to_string(),
})?;
loop {
select! {
x = input.next() => if let Some(update) = x {
let update: Result<ExecInsideZoneRequest, Status> = update.map_err(|error| ApiError {
message: error.to_string()
}.into());
if let Ok(update) = update {
if !update.stdin.is_empty() {
let _ = handle.update(IdmRequest {
request: Some(IdmRequestType::ExecStream(ExecStreamRequestUpdate {
update: Some(Update::Stdin(ExecStreamRequestStdin {
data: update.stdin,
closed: update.stdin_closed,
})),
}))}).await;
}
if let Some(ref terminal_size) = update.terminal_size {
let _ = handle.update(IdmRequest {
request: Some(IdmRequestType::ExecStream(ExecStreamRequestUpdate {
update: Some(Update::TerminalResize(ExecStreamRequestTerminalSize {
rows: terminal_size.rows,
columns: terminal_size.columns,
})),
}))}).await;
}
}
},
x = handle.receiver.recv() => match x {
Some(response) => {
let Some(IdmResponseType::ExecStream(update)) = response.response else {
break;
};
let reply = ExecInsideZoneReply {
exited: update.exited,
error: update.error,
exit_code: update.exit_code,
stdout: update.stdout,
stderr: update.stderr,
};
yield reply;
},
None => {
break;
}
}
}
}
};
Ok(Box::pin(output))
}
}

View File

@ -0,0 +1,33 @@
use anyhow::Result;
use krata::v1::control::{GetHostCpuTopologyReply, GetHostCpuTopologyRequest, HostCpuTopologyInfo};
use kratart::Runtime;
pub struct GetHostCpuTopologyRpc {
runtime: Runtime,
}
impl GetHostCpuTopologyRpc {
pub fn new(runtime: Runtime) -> Self {
Self { runtime }
}
pub async fn process(
self,
_request: GetHostCpuTopologyRequest,
) -> Result<GetHostCpuTopologyReply> {
let power = self.runtime.power_management_context().await?;
let cpu_topology = power.cpu_topology().await?;
let mut cpus = vec![];
for cpu in cpu_topology {
cpus.push(HostCpuTopologyInfo {
core: cpu.core,
socket: cpu.socket,
node: cpu.node,
thread: cpu.thread,
class: cpu.class as i32,
})
}
Ok(GetHostCpuTopologyReply { cpus })
}
}

View File

@ -0,0 +1,37 @@
use crate::command::DaemonCommand;
use crate::network::assignment::NetworkAssignment;
use crate::zlt::ZoneLookupTable;
use anyhow::Result;
use krata::v1::control::{GetHostStatusReply, GetHostStatusRequest};
pub struct GetHostStatusRpc {
network: NetworkAssignment,
zlt: ZoneLookupTable,
}
impl GetHostStatusRpc {
pub fn new(ip: NetworkAssignment, zlt: ZoneLookupTable) -> Self {
Self { network: ip, zlt }
}
pub async fn process(self, _request: GetHostStatusRequest) -> Result<GetHostStatusReply> {
let host_reservation = self.network.retrieve(self.zlt.host_uuid()).await?;
Ok(GetHostStatusReply {
host_domid: self.zlt.host_domid(),
host_uuid: self.zlt.host_uuid().to_string(),
krata_version: DaemonCommand::version(),
host_ipv4: host_reservation
.as_ref()
.map(|x| format!("{}/{}", x.ipv4, x.ipv4_prefix))
.unwrap_or_default(),
host_ipv6: host_reservation
.as_ref()
.map(|x| format!("{}/{}", x.ipv6, x.ipv6_prefix))
.unwrap_or_default(),
host_mac: host_reservation
.as_ref()
.map(|x| x.mac.to_string().to_lowercase().replace('-', ":"))
.unwrap_or_default(),
})
}
}

View File

@ -0,0 +1,24 @@
use std::str::FromStr;
use anyhow::Result;
use uuid::Uuid;
use krata::v1::control::{GetZoneReply, GetZoneRequest};
use crate::db::zone::ZoneStore;
pub struct GetZoneRpc {
zones: ZoneStore,
}
impl GetZoneRpc {
pub fn new(zones: ZoneStore) -> Self {
Self { zones }
}
pub async fn process(self, request: GetZoneRequest) -> Result<GetZoneReply> {
let mut zones = self.zones.list().await?;
let zone = zones.remove(&Uuid::from_str(&request.zone_id)?);
Ok(GetZoneReply { zone })
}
}

View File

@ -0,0 +1,28 @@
use anyhow::Result;
use krata::v1::control::{DeviceInfo, ListDevicesReply, ListDevicesRequest};
use crate::devices::DaemonDeviceManager;
pub struct ListDevicesRpc {
devices: DaemonDeviceManager,
}
impl ListDevicesRpc {
pub fn new(devices: DaemonDeviceManager) -> Self {
Self { devices }
}
pub async fn process(self, _request: ListDevicesRequest) -> Result<ListDevicesReply> {
let mut devices = Vec::new();
let state = self.devices.copy().await?;
for (name, state) in state {
devices.push(DeviceInfo {
name,
claimed: state.owner.is_some(),
owner: state.owner.map(|x| x.to_string()).unwrap_or_default(),
});
}
Ok(ListDevicesReply { devices })
}
}

View File

@ -0,0 +1,28 @@
use anyhow::Result;
use krata::v1::{
common::NetworkReservation,
control::{ListNetworkReservationsReply, ListNetworkReservationsRequest},
};
use crate::network::assignment::NetworkAssignment;
pub struct ListNetworkReservationsRpc {
network: NetworkAssignment,
}
impl ListNetworkReservationsRpc {
pub fn new(network: NetworkAssignment) -> Self {
Self { network }
}
pub async fn process(
self,
_request: ListNetworkReservationsRequest,
) -> Result<ListNetworkReservationsReply> {
let state = self.network.read_reservations().await?;
let reservations: Vec<NetworkReservation> =
state.into_values().map(|x| x.into()).collect::<Vec<_>>();
Ok(ListNetworkReservationsReply { reservations })
}
}

View File

@ -0,0 +1,21 @@
use anyhow::Result;
use krata::v1::common::Zone;
use krata::v1::control::{ListZonesReply, ListZonesRequest};
use crate::db::zone::ZoneStore;
pub struct ListZonesRpc {
zones: ZoneStore,
}
impl ListZonesRpc {
pub fn new(zones: ZoneStore) -> Self {
Self { zones }
}
pub async fn process(self, _request: ListZonesRequest) -> Result<ListZonesReply> {
let zones = self.zones.list().await?;
let zones = zones.into_values().collect::<Vec<Zone>>();
Ok(ListZonesReply { zones })
}
}

View File

@ -0,0 +1,365 @@
use std::pin::Pin;
use anyhow::Error;
use futures::Stream;
use list_network_reservations::ListNetworkReservationsRpc;
use tokio::sync::mpsc::Sender;
use tonic::{Request, Response, Status, Streaming};
use uuid::Uuid;
use krata::v1::control::{
control_service_server::ControlService, CreateZoneReply, CreateZoneRequest, DestroyZoneReply,
DestroyZoneRequest, ExecInsideZoneReply, ExecInsideZoneRequest, GetHostCpuTopologyReply,
GetHostCpuTopologyRequest, GetHostStatusReply, GetHostStatusRequest, ListDevicesReply,
ListDevicesRequest, ListZonesReply, ListZonesRequest, PullImageReply, PullImageRequest,
ReadHypervisorConsoleReply, ReadHypervisorConsoleRequest, ReadZoneMetricsReply,
ReadZoneMetricsRequest, ResolveZoneIdReply, ResolveZoneIdRequest, SnoopIdmReply,
SnoopIdmRequest, UpdateZoneResourcesReply, UpdateZoneResourcesRequest, WatchEventsReply,
WatchEventsRequest, ZoneConsoleReply, ZoneConsoleRequest,
};
use krata::v1::control::{
GetZoneReply, GetZoneRequest, ListNetworkReservationsReply, ListNetworkReservationsRequest,
SetHostPowerManagementPolicyReply, SetHostPowerManagementPolicyRequest,
};
use krataoci::packer::service::OciPackerService;
use kratart::Runtime;
use crate::control::attach_zone_console::AttachZoneConsoleRpc;
use crate::control::create_zone::CreateZoneRpc;
use crate::control::destroy_zone::DestroyZoneRpc;
use crate::control::exec_inside_zone::ExecInsideZoneRpc;
use crate::control::get_host_cpu_topology::GetHostCpuTopologyRpc;
use crate::control::get_host_status::GetHostStatusRpc;
use crate::control::get_zone::GetZoneRpc;
use crate::control::list_devices::ListDevicesRpc;
use crate::control::list_zones::ListZonesRpc;
use crate::control::pull_image::PullImageRpc;
use crate::control::read_hypervisor_console::ReadHypervisorConsoleRpc;
use crate::control::read_zone_metrics::ReadZoneMetricsRpc;
use crate::control::resolve_zone_id::ResolveZoneIdRpc;
use crate::control::set_host_power_management_policy::SetHostPowerManagementPolicyRpc;
use crate::control::snoop_idm::SnoopIdmRpc;
use crate::control::update_zone_resources::UpdateZoneResourcesRpc;
use crate::control::watch_events::WatchEventsRpc;
use crate::db::zone::ZoneStore;
use crate::network::assignment::NetworkAssignment;
use crate::{
console::DaemonConsoleHandle, devices::DaemonDeviceManager, event::DaemonEventContext,
idm::DaemonIdmHandle, zlt::ZoneLookupTable,
};
pub mod attach_zone_console;
pub mod create_zone;
pub mod destroy_zone;
pub mod exec_inside_zone;
pub mod get_host_cpu_topology;
pub mod get_host_status;
pub mod get_zone;
pub mod list_devices;
pub mod list_network_reservations;
pub mod list_zones;
pub mod pull_image;
pub mod read_hypervisor_console;
pub mod read_zone_metrics;
pub mod resolve_zone_id;
pub mod set_host_power_management_policy;
pub mod snoop_idm;
pub mod update_zone_resources;
pub mod watch_events;
pub struct ApiError {
message: String,
}
impl From<Error> for ApiError {
fn from(value: Error) -> Self {
ApiError {
message: value.to_string(),
}
}
}
impl From<ApiError> for Status {
fn from(value: ApiError) -> Self {
Status::unknown(value.message)
}
}
#[derive(Clone)]
pub struct DaemonControlService {
zlt: ZoneLookupTable,
devices: DaemonDeviceManager,
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
zones: ZoneStore,
network: NetworkAssignment,
zone_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
runtime: Runtime,
}
impl DaemonControlService {
#[allow(clippy::too_many_arguments)]
pub fn new(
zlt: ZoneLookupTable,
devices: DaemonDeviceManager,
events: DaemonEventContext,
console: DaemonConsoleHandle,
idm: DaemonIdmHandle,
zones: ZoneStore,
network: NetworkAssignment,
zone_reconciler_notify: Sender<Uuid>,
packer: OciPackerService,
runtime: Runtime,
) -> Self {
Self {
zlt,
devices,
events,
console,
idm,
zones,
network,
zone_reconciler_notify,
packer,
runtime,
}
}
}
#[tonic::async_trait]
impl ControlService for DaemonControlService {
async fn get_host_status(
&self,
request: Request<GetHostStatusRequest>,
) -> Result<Response<GetHostStatusReply>, Status> {
let request = request.into_inner();
adapt(
GetHostStatusRpc::new(self.network.clone(), self.zlt.clone())
.process(request)
.await,
)
}
type SnoopIdmStream =
Pin<Box<dyn Stream<Item = Result<SnoopIdmReply, Status>> + Send + 'static>>;
async fn snoop_idm(
&self,
request: Request<SnoopIdmRequest>,
) -> Result<Response<Self::SnoopIdmStream>, Status> {
let request = request.into_inner();
adapt(
SnoopIdmRpc::new(self.idm.clone(), self.zlt.clone())
.process(request)
.await,
)
}
async fn get_host_cpu_topology(
&self,
request: Request<GetHostCpuTopologyRequest>,
) -> Result<Response<GetHostCpuTopologyReply>, Status> {
let request = request.into_inner();
adapt(
GetHostCpuTopologyRpc::new(self.runtime.clone())
.process(request)
.await,
)
}
async fn set_host_power_management_policy(
&self,
request: Request<SetHostPowerManagementPolicyRequest>,
) -> Result<Response<SetHostPowerManagementPolicyReply>, Status> {
let request = request.into_inner();
adapt(
SetHostPowerManagementPolicyRpc::new(self.runtime.clone())
.process(request)
.await,
)
}
async fn list_devices(
&self,
request: Request<ListDevicesRequest>,
) -> Result<Response<ListDevicesReply>, Status> {
let request = request.into_inner();
adapt(
ListDevicesRpc::new(self.devices.clone())
.process(request)
.await,
)
}
async fn list_network_reservations(
&self,
request: Request<ListNetworkReservationsRequest>,
) -> Result<Response<ListNetworkReservationsReply>, Status> {
let request = request.into_inner();
adapt(
ListNetworkReservationsRpc::new(self.network.clone())
.process(request)
.await,
)
}
type PullImageStream =
Pin<Box<dyn Stream<Item = Result<PullImageReply, Status>> + Send + 'static>>;
async fn pull_image(
&self,
request: Request<PullImageRequest>,
) -> Result<Response<Self::PullImageStream>, Status> {
let request = request.into_inner();
adapt(
PullImageRpc::new(self.packer.clone())
.process(request)
.await,
)
}
async fn create_zone(
&self,
request: Request<CreateZoneRequest>,
) -> Result<Response<CreateZoneReply>, Status> {
let request = request.into_inner();
adapt(
CreateZoneRpc::new(
self.zones.clone(),
self.zlt.clone(),
self.zone_reconciler_notify.clone(),
)
.process(request)
.await,
)
}
async fn destroy_zone(
&self,
request: Request<DestroyZoneRequest>,
) -> Result<Response<DestroyZoneReply>, Status> {
let request = request.into_inner();
adapt(
DestroyZoneRpc::new(self.zones.clone(), self.zone_reconciler_notify.clone())
.process(request)
.await,
)
}
async fn resolve_zone_id(
&self,
request: Request<ResolveZoneIdRequest>,
) -> Result<Response<ResolveZoneIdReply>, Status> {
let request = request.into_inner();
adapt(
ResolveZoneIdRpc::new(self.zones.clone())
.process(request)
.await,
)
}
async fn get_zone(
&self,
request: Request<GetZoneRequest>,
) -> Result<Response<GetZoneReply>, Status> {
let request = request.into_inner();
adapt(GetZoneRpc::new(self.zones.clone()).process(request).await)
}
async fn update_zone_resources(
&self,
request: Request<UpdateZoneResourcesRequest>,
) -> Result<Response<UpdateZoneResourcesReply>, Status> {
let request = request.into_inner();
adapt(
UpdateZoneResourcesRpc::new(self.runtime.clone(), self.zones.clone())
.process(request)
.await,
)
}
async fn list_zones(
&self,
request: Request<ListZonesRequest>,
) -> Result<Response<ListZonesReply>, Status> {
let request = request.into_inner();
adapt(ListZonesRpc::new(self.zones.clone()).process(request).await)
}
type AttachZoneConsoleStream =
Pin<Box<dyn Stream<Item = Result<ZoneConsoleReply, Status>> + Send + 'static>>;
async fn attach_zone_console(
&self,
request: Request<Streaming<ZoneConsoleRequest>>,
) -> Result<Response<Self::AttachZoneConsoleStream>, Status> {
let input = request.into_inner();
adapt(
AttachZoneConsoleRpc::new(self.console.clone())
.process(input)
.await,
)
}
type ExecInsideZoneStream =
Pin<Box<dyn Stream<Item = Result<ExecInsideZoneReply, Status>> + Send + 'static>>;
async fn exec_inside_zone(
&self,
request: Request<Streaming<ExecInsideZoneRequest>>,
) -> Result<Response<Self::ExecInsideZoneStream>, Status> {
let input = request.into_inner();
adapt(
ExecInsideZoneRpc::new(self.idm.clone())
.process(input)
.await,
)
}
async fn read_zone_metrics(
&self,
request: Request<ReadZoneMetricsRequest>,
) -> Result<Response<ReadZoneMetricsReply>, Status> {
let request = request.into_inner();
adapt(
ReadZoneMetricsRpc::new(self.idm.clone())
.process(request)
.await,
)
}
type WatchEventsStream =
Pin<Box<dyn Stream<Item = Result<WatchEventsReply, Status>> + Send + 'static>>;
async fn watch_events(
&self,
request: Request<WatchEventsRequest>,
) -> Result<Response<Self::WatchEventsStream>, Status> {
let request = request.into_inner();
adapt(
WatchEventsRpc::new(self.events.clone())
.process(request)
.await,
)
}
async fn read_hypervisor_console(
&self,
request: Request<ReadHypervisorConsoleRequest>,
) -> Result<Response<ReadHypervisorConsoleReply>, Status> {
let request = request.into_inner();
adapt(
ReadHypervisorConsoleRpc::new(self.runtime.clone())
.process(request)
.await,
)
}
}
fn adapt<T>(result: anyhow::Result<T>) -> Result<Response<T>, Status> {
result
.map(Response::new)
.map_err(|error| Status::unknown(error.to_string()))
}
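
Every handler above funnels its anyhow::Result through adapt, so a successful value comes back wrapped in a tonic Response and any error collapses into Status::unknown with the error text as the message. A small illustrative check of both paths (the values are made up; adapt is the function defined above and is private to this module):

fn adapt_demo() {
    let ok: anyhow::Result<u32> = Ok(7);
    let err: anyhow::Result<u32> = Err(anyhow::anyhow!("boom"));

    // Success path: the payload is wrapped in a tonic Response.
    assert_eq!(adapt(ok).unwrap().into_inner(), 7);

    // Error path: the anyhow error becomes a Status with code Unknown.
    let status = adapt(err).unwrap_err();
    assert_eq!(status.code(), tonic::Code::Unknown);
    assert_eq!(status.message(), "boom");
}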

View File

@ -0,0 +1,100 @@
use crate::control::ApiError;
use crate::oci::convert_oci_progress;
use anyhow::Result;
use async_stream::try_stream;
use krata::v1::common::OciImageFormat;
use krata::v1::control::{PullImageReply, PullImageRequest};
use krataoci::name::ImageName;
use krataoci::packer::service::OciPackerService;
use krataoci::packer::{OciPackedFormat, OciPackedImage};
use krataoci::progress::{OciProgress, OciProgressContext};
use std::pin::Pin;
use tokio::select;
use tokio::task::JoinError;
use tokio_stream::Stream;
use tonic::Status;
enum PullImageSelect {
Progress(Option<OciProgress>),
Completed(Result<Result<OciPackedImage, anyhow::Error>, JoinError>),
}
pub struct PullImageRpc {
packer: OciPackerService,
}
impl PullImageRpc {
pub fn new(packer: OciPackerService) -> Self {
Self { packer }
}
pub async fn process(
self,
request: PullImageRequest,
) -> Result<Pin<Box<dyn Stream<Item = Result<PullImageReply, Status>> + Send + 'static>>> {
let name = ImageName::parse(&request.image)?;
let format = match request.format() {
OciImageFormat::Unknown => OciPackedFormat::Squashfs,
OciImageFormat::Squashfs => OciPackedFormat::Squashfs,
OciImageFormat::Erofs => OciPackedFormat::Erofs,
OciImageFormat::Tar => OciPackedFormat::Tar,
};
let (context, mut receiver) = OciProgressContext::create();
let our_packer = self.packer;
let output = try_stream! {
let mut task = tokio::task::spawn(async move {
our_packer.request(name, format, request.overwrite_cache, request.update, context).await
});
let abort_handle = task.abort_handle();
let _task_cancel_guard = scopeguard::guard(abort_handle, |handle| {
handle.abort();
});
loop {
let what = select! {
x = receiver.changed() => match x {
Ok(_) => PullImageSelect::Progress(Some(receiver.borrow_and_update().clone())),
Err(_) => PullImageSelect::Progress(None),
},
x = &mut task => PullImageSelect::Completed(x),
};
match what {
PullImageSelect::Progress(Some(progress)) => {
let reply = PullImageReply {
progress: Some(convert_oci_progress(progress)),
digest: String::new(),
format: OciImageFormat::Unknown.into(),
};
yield reply;
},
PullImageSelect::Completed(result) => {
let result = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let packed = result.map_err(|err| ApiError {
message: err.to_string(),
})?;
let reply = PullImageReply {
progress: None,
digest: packed.digest,
format: match packed.format {
OciPackedFormat::Squashfs => OciImageFormat::Squashfs.into(),
OciPackedFormat::Erofs => OciImageFormat::Erofs.into(),
OciPackedFormat::Tar => OciImageFormat::Tar.into(),
},
};
yield reply;
break;
},
_ => {
continue;
}
}
}
};
Ok(Box::pin(output))
}
}
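
On the wire, this stream yields progress-only frames (empty digest, Unknown format) until the final frame carries the packed digest and format. A rough client-side consumption sketch; the tonic-generated control_service_client module and the endpoint address are assumptions for illustration, not part of this diff:

use krata::v1::control::control_service_client::ControlServiceClient;
use krata::v1::control::PullImageRequest;

async fn pull(image: &str) -> anyhow::Result<()> {
    // Hypothetical endpoint; the daemon's actual listen address comes from its config.
    let mut client = ControlServiceClient::connect("http://127.0.0.1:50051").await?;
    let request = PullImageRequest {
        image: image.to_string(),
        // Leaving format as its default (Unknown) lets the server fall back to squashfs.
        ..Default::default()
    };
    let mut stream = client.pull_image(request).await?.into_inner();
    while let Some(reply) = stream.message().await? {
        if reply.digest.is_empty() {
            println!("pulling {image}: progress update");
        } else {
            println!("pulled {image}: digest {}", reply.digest);
        }
    }
    Ok(())
}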

View File

@ -0,0 +1,23 @@
use anyhow::Result;
use krata::v1::control::{ReadHypervisorConsoleReply, ReadHypervisorConsoleRequest};
use kratart::Runtime;
pub struct ReadHypervisorConsoleRpc {
runtime: Runtime,
}
impl ReadHypervisorConsoleRpc {
pub fn new(runtime: Runtime) -> Self {
Self { runtime }
}
pub async fn process(
self,
_: ReadHypervisorConsoleRequest,
) -> Result<ReadHypervisorConsoleReply> {
let data = self.runtime.read_hypervisor_console(false).await?;
Ok(ReadHypervisorConsoleReply {
data: data.to_string(),
})
}
}

View File

@ -0,0 +1,40 @@
use std::str::FromStr;
use anyhow::Result;
use uuid::Uuid;
use krata::idm::internal::MetricsRequest;
use krata::idm::internal::{
request::Request as IdmRequestType, response::Response as IdmResponseType,
Request as IdmRequest,
};
use krata::v1::control::{ReadZoneMetricsReply, ReadZoneMetricsRequest};
use crate::idm::DaemonIdmHandle;
use crate::metrics::idm_metric_to_api;
pub struct ReadZoneMetricsRpc {
idm: DaemonIdmHandle,
}
impl ReadZoneMetricsRpc {
pub fn new(idm: DaemonIdmHandle) -> Self {
Self { idm }
}
pub async fn process(self, request: ReadZoneMetricsRequest) -> Result<ReadZoneMetricsReply> {
let uuid = Uuid::from_str(&request.zone_id)?;
let client = self.idm.client(uuid).await?;
let response = client
.send(IdmRequest {
request: Some(IdmRequestType::Metrics(MetricsRequest {})),
})
.await?;
let mut reply = ReadZoneMetricsReply::default();
if let Some(IdmResponseType::Metrics(metrics)) = response.response {
reply.root = metrics.root.map(idm_metric_to_api);
}
Ok(reply)
}
}

View File

@ -0,0 +1,30 @@
use anyhow::Result;
use krata::v1::common::Zone;
use krata::v1::control::{ResolveZoneIdReply, ResolveZoneIdRequest};
use crate::db::zone::ZoneStore;
pub struct ResolveZoneIdRpc {
zones: ZoneStore,
}
impl ResolveZoneIdRpc {
pub fn new(zones: ZoneStore) -> Self {
Self { zones }
}
pub async fn process(self, request: ResolveZoneIdRequest) -> Result<ResolveZoneIdReply> {
let zones = self.zones.list().await?;
let zones = zones
.into_values()
.filter(|x| {
let comparison_spec = x.spec.as_ref().cloned().unwrap_or_default();
(!request.name.is_empty() && comparison_spec.name == request.name)
|| x.id == request.name
})
.collect::<Vec<Zone>>();
Ok(ResolveZoneIdReply {
zone_id: zones.first().cloned().map(|x| x.id).unwrap_or_default(),
})
}
}

View File

@ -0,0 +1,25 @@
use anyhow::Result;
use krata::v1::control::{SetHostPowerManagementPolicyReply, SetHostPowerManagementPolicyRequest};
use kratart::Runtime;
pub struct SetHostPowerManagementPolicyRpc {
runtime: Runtime,
}
impl SetHostPowerManagementPolicyRpc {
pub fn new(runtime: Runtime) -> Self {
Self { runtime }
}
pub async fn process(
self,
request: SetHostPowerManagementPolicyRequest,
) -> Result<SetHostPowerManagementPolicyReply> {
let power = self.runtime.power_management_context().await?;
let scheduler = &request.scheduler;
power.set_smt_policy(request.smt_awareness).await?;
power.set_scheduler_policy(scheduler).await?;
Ok(SetHostPowerManagementPolicyReply {})
}
}

View File

@ -0,0 +1,39 @@
use crate::idm::DaemonIdmHandle;
use crate::zlt::ZoneLookupTable;
use anyhow::Result;
use async_stream::try_stream;
use krata::v1::control::{SnoopIdmReply, SnoopIdmRequest};
use std::pin::Pin;
use tokio_stream::Stream;
use tonic::Status;
pub struct SnoopIdmRpc {
idm: DaemonIdmHandle,
zlt: ZoneLookupTable,
}
impl SnoopIdmRpc {
pub fn new(idm: DaemonIdmHandle, zlt: ZoneLookupTable) -> Self {
Self { idm, zlt }
}
pub async fn process(
self,
_request: SnoopIdmRequest,
) -> Result<Pin<Box<dyn Stream<Item = Result<SnoopIdmReply, Status>> + Send + 'static>>> {
let mut messages = self.idm.snoop();
let zlt = self.zlt.clone();
let output = try_stream! {
while let Ok(event) = messages.recv().await {
let Some(from_uuid) = zlt.lookup_uuid_by_domid(event.from).await else {
continue;
};
let Some(to_uuid) = zlt.lookup_uuid_by_domid(event.to).await else {
continue;
};
yield SnoopIdmReply { from: from_uuid.to_string(), to: to_uuid.to_string(), packet: Some(event.packet) };
}
};
Ok(Box::pin(output))
}
}

View File

@ -0,0 +1,82 @@
use std::str::FromStr;
use anyhow::{anyhow, Result};
use uuid::Uuid;
use krata::v1::common::{ZoneResourceStatus, ZoneState};
use krata::v1::control::{UpdateZoneResourcesReply, UpdateZoneResourcesRequest};
use kratart::Runtime;
use crate::db::zone::ZoneStore;
pub struct UpdateZoneResourcesRpc {
runtime: Runtime,
zones: ZoneStore,
}
impl UpdateZoneResourcesRpc {
pub fn new(runtime: Runtime, zones: ZoneStore) -> Self {
Self { runtime, zones }
}
pub async fn process(
self,
request: UpdateZoneResourcesRequest,
) -> Result<UpdateZoneResourcesReply> {
let uuid = Uuid::from_str(&request.zone_id)?;
let Some(mut zone) = self.zones.read(uuid).await? else {
return Err(anyhow!("zone not found"));
};
let Some(ref mut status) = zone.status else {
return Err(anyhow!("zone state not available"));
};
if status.state() != ZoneState::Created {
return Err(anyhow!("zone is in an invalid state"));
}
if status.domid == 0 || status.domid == u32::MAX {
return Err(anyhow!("zone domid is invalid"));
}
let mut resources = request.resources.unwrap_or_default();
if resources.target_memory > resources.max_memory {
resources.max_memory = resources.target_memory;
}
if resources.target_cpus < 1 {
resources.target_cpus = 1;
}
let initial_resources = zone
.spec
.clone()
.unwrap_or_default()
.initial_resources
.unwrap_or_default();
if resources.target_cpus > initial_resources.max_cpus {
resources.target_cpus = initial_resources.max_cpus;
}
resources.max_cpus = initial_resources.max_cpus;
self.runtime
.set_memory_resources(
status.domid,
resources.target_memory * 1024 * 1024,
resources.max_memory * 1024 * 1024,
)
.await
.map_err(|error| anyhow!("failed to set memory resources: {}", error))?;
self.runtime
.set_cpu_resources(status.domid, resources.target_cpus)
.await
.map_err(|error| anyhow!("failed to set cpu resources: {}", error))?;
status.resource_status = Some(ZoneResourceStatus {
active_resources: Some(resources),
});
self.zones.update(uuid, zone).await?;
Ok(UpdateZoneResourcesReply {})
}
}
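
The clamping above enforces three invariants before touching the runtime: max_memory is raised to at least target_memory, target_cpus is at least 1, and target_cpus (and max_cpus) never exceed the zone's initial max_cpus; the runtime call then converts the megabyte values to bytes. A standalone sketch of the same rules with made-up numbers (plain integers instead of the ZoneResources message):

struct Resources {
    target_memory: u64,
    max_memory: u64,
    target_cpus: u32,
    max_cpus: u32,
}

fn clamp(mut r: Resources, initial_max_cpus: u32) -> Resources {
    if r.target_memory > r.max_memory {
        r.max_memory = r.target_memory; // asking for more than the cap raises the cap
    }
    if r.target_cpus < 1 {
        r.target_cpus = 1; // always at least one vcpu
    }
    if r.target_cpus > initial_max_cpus {
        r.target_cpus = initial_max_cpus; // never exceed the initial allocation
    }
    r.max_cpus = initial_max_cpus;
    r
}

fn main() {
    // e.g. target 2048 MB against a stale 1024 MB cap, 8 requested vcpus, zone created with 4
    let r = clamp(Resources { target_memory: 2048, max_memory: 1024, target_cpus: 8, max_cpus: 0 }, 4);
    assert_eq!((r.max_memory, r.target_cpus, r.max_cpus), (2048, 4, 4));
}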

View File

@ -0,0 +1,31 @@
use crate::event::DaemonEventContext;
use anyhow::Result;
use async_stream::try_stream;
use krata::v1::control::{WatchEventsReply, WatchEventsRequest};
use std::pin::Pin;
use tokio_stream::Stream;
use tonic::Status;
pub struct WatchEventsRpc {
events: DaemonEventContext,
}
impl WatchEventsRpc {
pub fn new(events: DaemonEventContext) -> Self {
Self { events }
}
pub async fn process(
self,
_request: WatchEventsRequest,
) -> Result<Pin<Box<dyn Stream<Item = Result<WatchEventsReply, Status>> + Send + 'static>>>
{
let mut events = self.events.subscribe();
let output = try_stream! {
while let Ok(event) = events.recv().await {
yield WatchEventsReply { event: Some(event), };
}
};
Ok(Box::pin(output))
}
}

View File

@ -3,7 +3,7 @@ use redb::Database;
 use std::path::Path;
 use std::sync::Arc;
-pub mod ip;
+pub mod network;
 pub mod zone;
 #[derive(Clone)]

View File

@ -1,6 +1,7 @@
 use crate::db::KrataDatabase;
 use advmac::MacAddr6;
 use anyhow::Result;
+use krata::v1::common::NetworkReservation as ApiNetworkReservation;
 use log::error;
 use redb::{ReadableTable, TableDefinition};
 use serde::{Deserialize, Serialize};
@ -8,24 +9,25 @@ use std::collections::HashMap;
 use std::net::{Ipv4Addr, Ipv6Addr};
 use uuid::Uuid;
-const IP_RESERVATION_TABLE: TableDefinition<u128, &[u8]> = TableDefinition::new("ip-reservation");
+const NETWORK_RESERVATION_TABLE: TableDefinition<u128, &[u8]> =
+TableDefinition::new("network-reservation");
 #[derive(Clone)]
-pub struct IpReservationStore {
+pub struct NetworkReservationStore {
 db: KrataDatabase,
 }
-impl IpReservationStore {
+impl NetworkReservationStore {
 pub fn open(db: KrataDatabase) -> Result<Self> {
 let write = db.database.begin_write()?;
-let _ = write.open_table(IP_RESERVATION_TABLE);
+let _ = write.open_table(NETWORK_RESERVATION_TABLE);
 write.commit()?;
-Ok(IpReservationStore { db })
+Ok(NetworkReservationStore { db })
 }
-pub async fn read(&self, id: Uuid) -> Result<Option<IpReservation>> {
+pub async fn read(&self, id: Uuid) -> Result<Option<NetworkReservation>> {
 let read = self.db.database.begin_read()?;
-let table = read.open_table(IP_RESERVATION_TABLE)?;
+let table = read.open_table(NETWORK_RESERVATION_TABLE)?;
 let Some(entry) = table.get(id.to_u128_le())? else {
 return Ok(None);
 };
@ -33,26 +35,26 @@ impl IpReservationStore {
 Ok(Some(serde_json::from_slice(bytes)?))
 }
-pub async fn list(&self) -> Result<HashMap<Uuid, IpReservation>> {
+pub async fn list(&self) -> Result<HashMap<Uuid, NetworkReservation>> {
 enum ListEntry {
-Valid(Uuid, IpReservation),
+Valid(Uuid, NetworkReservation),
 Invalid(Uuid),
 }
-let mut reservations: HashMap<Uuid, IpReservation> = HashMap::new();
+let mut reservations: HashMap<Uuid, NetworkReservation> = HashMap::new();
 let corruptions = {
 let read = self.db.database.begin_read()?;
-let table = read.open_table(IP_RESERVATION_TABLE)?;
+let table = read.open_table(NETWORK_RESERVATION_TABLE)?;
 table
 .iter()?
 .flat_map(|result| {
 result.map(|(key, value)| {
 let uuid = Uuid::from_u128_le(key.value());
-match serde_json::from_slice::<IpReservation>(value.value()) {
+match serde_json::from_slice::<NetworkReservation>(value.value()) {
 Ok(reservation) => ListEntry::Valid(uuid, reservation),
 Err(error) => {
 error!(
-"found invalid ip reservation in database for uuid {}: {}",
+"found invalid network reservation in database for uuid {}: {}",
 uuid, error
 );
 ListEntry::Invalid(uuid)
@ -73,7 +75,7 @@ impl IpReservationStore {
 if !corruptions.is_empty() {
 let write = self.db.database.begin_write()?;
-let mut table = write.open_table(IP_RESERVATION_TABLE)?;
+let mut table = write.open_table(NETWORK_RESERVATION_TABLE)?;
 for corruption in corruptions {
 table.remove(corruption.to_u128_le())?;
 }
@ -82,10 +84,10 @@ impl IpReservationStore {
 Ok(reservations)
 }
-pub async fn update(&self, id: Uuid, entry: IpReservation) -> Result<()> {
+pub async fn update(&self, id: Uuid, entry: NetworkReservation) -> Result<()> {
 let write = self.db.database.begin_write()?;
 {
-let mut table = write.open_table(IP_RESERVATION_TABLE)?;
+let mut table = write.open_table(NETWORK_RESERVATION_TABLE)?;
 let bytes = serde_json::to_vec(&entry)?;
 table.insert(id.to_u128_le(), bytes.as_slice())?;
 }
@ -96,7 +98,7 @@ impl IpReservationStore {
 pub async fn remove(&self, id: Uuid) -> Result<()> {
 let write = self.db.database.begin_write()?;
 {
-let mut table = write.open_table(IP_RESERVATION_TABLE)?;
+let mut table = write.open_table(NETWORK_RESERVATION_TABLE)?;
 table.remove(id.to_u128_le())?;
 }
 write.commit()?;
@ -105,7 +107,7 @@ impl IpReservationStore {
 }
 #[derive(Serialize, Deserialize, Clone, Debug)]
-pub struct IpReservation {
+pub struct NetworkReservation {
 pub uuid: String,
 pub ipv4: Ipv4Addr,
 pub ipv6: Ipv6Addr,
@ -116,3 +118,17 @@ pub struct IpReservation {
 pub gateway_ipv6: Ipv6Addr,
 pub gateway_mac: MacAddr6,
 }
+impl From<NetworkReservation> for ApiNetworkReservation {
+fn from(val: NetworkReservation) -> Self {
+ApiNetworkReservation {
+uuid: val.uuid,
+ipv4: format!("{}/{}", val.ipv4, val.ipv4_prefix),
+ipv6: format!("{}/{}", val.ipv6, val.ipv6_prefix),
+mac: val.mac.to_string().to_lowercase().replace('-', ":"),
+gateway_ipv4: format!("{}/{}", val.gateway_ipv4, val.ipv4_prefix),
+gateway_ipv6: format!("{}/{}", val.gateway_ipv6, val.ipv6_prefix),
+gateway_mac: val.gateway_mac.to_string().to_lowercase().replace('-', ":"),
+}
+}
+}
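
The From impl added above backs the new ListNetworkReservations reply: addresses are flattened to CIDR strings and MACs are lowercased with dashes replaced by colons. A tiny illustration of that MAC normalization (the sample value is hypothetical):

fn main() {
    // Mirrors `val.mac.to_string().to_lowercase().replace('-', ":")` from the impl above,
    // starting from the dashed form that the MAC type renders.
    let mac = "AA-BB-CC-DD-EE-FF".to_lowercase().replace('-', ":");
    assert_eq!(mac, "aa:bb:cc:dd:ee:ff");
    // Likewise an ipv4 of 10.75.70.2 with prefix 16 is rendered as "10.75.70.2/16".
}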

View File

@ -1,7 +1,7 @@
-use crate::db::ip::IpReservationStore;
+use crate::db::network::NetworkReservationStore;
 use crate::db::zone::ZoneStore;
 use crate::db::KrataDatabase;
-use crate::ip::assignment::IpAssignment;
+use crate::network::assignment::NetworkAssignment;
 use anyhow::{anyhow, Result};
 use config::DaemonConfig;
 use console::{DaemonConsole, DaemonConsoleHandle};
@ -16,6 +16,7 @@ use kratart::Runtime;
 use log::{debug, info};
 use reconcile::zone::ZoneReconciler;
 use std::path::Path;
+use std::time::Duration;
 use std::{net::SocketAddr, path::PathBuf, str::FromStr, sync::Arc};
 use tokio::{
 fs,
@ -36,8 +37,8 @@ pub mod db;
 pub mod devices;
 pub mod event;
 pub mod idm;
-pub mod ip;
 pub mod metrics;
+pub mod network;
 pub mod oci;
 pub mod reconcile;
 pub mod zlt;
@ -48,7 +49,7 @@ pub struct Daemon {
 zlt: ZoneLookupTable,
 devices: DaemonDeviceManager,
 zones: ZoneStore,
-ip: IpAssignment,
+network: NetworkAssignment,
 events: DaemonEventContext,
 zone_reconciler_task: JoinHandle<()>,
 zone_reconciler_notify: Sender<Uuid>,
@ -127,9 +128,14 @@ impl Daemon {
 let runtime_for_reconciler = runtime.dupe().await?;
 let ipv4_network = Ipv4Network::from_str(&config.network.ipv4.subnet)?;
 let ipv6_network = Ipv6Network::from_str(&config.network.ipv6.subnet)?;
-let ip_reservation_store = IpReservationStore::open(database)?;
-let ip =
-IpAssignment::new(host_uuid, ipv4_network, ipv6_network, ip_reservation_store).await?;
+let network_reservation_store = NetworkReservationStore::open(database)?;
+let network = NetworkAssignment::new(
+host_uuid,
+ipv4_network,
+ipv6_network,
+network_reservation_store,
+)
+.await?;
 debug!("initializing zone reconciler");
 let zone_reconciler = ZoneReconciler::new(
 devices.clone(),
@ -142,7 +148,7 @@ impl Daemon {
 kernel_path,
 initrd_path,
 addons_path,
-ip.clone(),
+network.clone(),
 config.clone(),
 )?;
@ -165,7 +171,7 @@ impl Daemon {
 zlt,
 devices,
 zones,
-ip,
+network,
 events,
 zone_reconciler_task,
 zone_reconciler_notify,
@ -186,7 +192,7 @@ impl Daemon {
 self.console.clone(),
 self.idm.clone(),
 self.zones.clone(),
-self.ip.clone(),
+self.network.clone(),
 self.zone_reconciler_notify.clone(),
 self.packer.clone(),
 self.runtime.clone(),
@ -209,6 +215,8 @@ impl Daemon {
 server = server.tls_config(tls_config)?;
 }
+server = server.http2_keepalive_interval(Some(Duration::from_secs(10)));
 let server = server.add_service(ControlServiceServer::new(control_service));
 info!("listening on address {}", addr);
 match addr {

View File

@ -9,37 +9,37 @@ use std::{
 use tokio::sync::RwLock;
 use uuid::Uuid;
-use crate::db::ip::{IpReservation, IpReservationStore};
+use crate::db::network::{NetworkReservation, NetworkReservationStore};
 #[derive(Default, Clone)]
-pub struct IpAssignmentState {
-pub ipv4: HashMap<Ipv4Addr, IpReservation>,
-pub ipv6: HashMap<Ipv6Addr, IpReservation>,
+pub struct NetworkAssignmentState {
+pub ipv4: HashMap<Ipv4Addr, NetworkReservation>,
+pub ipv6: HashMap<Ipv6Addr, NetworkReservation>,
 }
 #[derive(Clone)]
-pub struct IpAssignment {
+pub struct NetworkAssignment {
 ipv4_network: Ipv4Network,
 ipv6_network: Ipv6Network,
 gateway_ipv4: Ipv4Addr,
 gateway_ipv6: Ipv6Addr,
 gateway_mac: MacAddr6,
-store: IpReservationStore,
-state: Arc<RwLock<IpAssignmentState>>,
+store: NetworkReservationStore,
+state: Arc<RwLock<NetworkAssignmentState>>,
 }
-impl IpAssignment {
+impl NetworkAssignment {
 pub async fn new(
 host_uuid: Uuid,
 ipv4_network: Ipv4Network,
 ipv6_network: Ipv6Network,
-store: IpReservationStore,
+store: NetworkReservationStore,
 ) -> Result<Self> {
-let mut state = IpAssignment::fetch_current_state(&store).await?;
+let mut state = NetworkAssignment::fetch_current_state(&store).await?;
 let gateway_reservation = if let Some(reservation) = store.read(Uuid::nil()).await? {
 reservation
 } else {
-IpAssignment::allocate(
+NetworkAssignment::allocate(
 &mut state,
 &store,
 Uuid::nil(),
@ -53,7 +53,7 @@ impl IpAssignment {
 };
 if store.read(host_uuid).await?.is_none() {
-let _ = IpAssignment::allocate(
+let _ = NetworkAssignment::allocate(
 &mut state,
 &store,
 host_uuid,
@ -66,7 +66,7 @@ impl IpAssignment {
 .await?;
 }
-let assignment = IpAssignment {
+let assignment = NetworkAssignment {
 ipv4_network,
 ipv6_network,
 gateway_ipv4: gateway_reservation.ipv4,
@ -78,9 +78,11 @@ impl IpAssignment {
 Ok(assignment)
 }
-async fn fetch_current_state(store: &IpReservationStore) -> Result<IpAssignmentState> {
+async fn fetch_current_state(
+store: &NetworkReservationStore,
+) -> Result<NetworkAssignmentState> {
 let reservations = store.list().await?;
-let mut state = IpAssignmentState::default();
+let mut state = NetworkAssignmentState::default();
 for reservation in reservations.values() {
 state.ipv4.insert(reservation.ipv4, reservation.clone());
 state.ipv6.insert(reservation.ipv6, reservation.clone());
@ -90,15 +92,15 @@ impl IpAssignment {
 #[allow(clippy::too_many_arguments)]
 async fn allocate(
-state: &mut IpAssignmentState,
-store: &IpReservationStore,
+state: &mut NetworkAssignmentState,
+store: &NetworkReservationStore,
 uuid: Uuid,
 ipv4_network: Ipv4Network,
 ipv6_network: Ipv6Network,
 gateway_ipv4: Option<Ipv4Addr>,
 gateway_ipv6: Option<Ipv6Addr>,
 gateway_mac: Option<MacAddr6>,
-) -> Result<IpReservation> {
+) -> Result<NetworkReservation> {
 let found_ipv4: Option<Ipv4Addr> = ipv4_network
 .iter()
 .filter(|ip| {
@ -136,7 +138,7 @@ impl IpAssignment {
 mac.set_local(true);
 mac.set_multicast(false);
-let reservation = IpReservation {
+let reservation = NetworkReservation {
 uuid: uuid.to_string(),
 ipv4,
 ipv6,
@ -153,9 +155,9 @@ impl IpAssignment {
 Ok(reservation)
 }
-pub async fn assign(&self, uuid: Uuid) -> Result<IpReservation> {
+pub async fn assign(&self, uuid: Uuid) -> Result<NetworkReservation> {
 let mut state = self.state.write().await;
-let reservation = IpAssignment::allocate(
+let reservation = NetworkAssignment::allocate(
 &mut state,
 &self.store,
 uuid,
@ -181,18 +183,22 @@ impl IpAssignment {
 Ok(())
 }
-pub async fn retrieve(&self, uuid: Uuid) -> Result<Option<IpReservation>> {
+pub async fn retrieve(&self, uuid: Uuid) -> Result<Option<NetworkReservation>> {
 self.store.read(uuid).await
 }
 pub async fn reload(&self) -> Result<()> {
 let mut state = self.state.write().await;
-let intermediate = IpAssignment::fetch_current_state(&self.store).await?;
+let intermediate = NetworkAssignment::fetch_current_state(&self.store).await?;
 *state = intermediate;
 Ok(())
 }
-pub async fn read(&self) -> Result<IpAssignmentState> {
+pub async fn read(&self) -> Result<NetworkAssignmentState> {
 Ok(self.state.read().await.clone())
 }
+pub async fn read_reservations(&self) -> Result<HashMap<Uuid, NetworkReservation>> {
+self.store.list().await
+}
 }

View File

@ -14,8 +14,8 @@ use std::sync::atomic::{AtomicBool, Ordering};
 use crate::config::{DaemonConfig, DaemonPciDeviceRdmReservePolicy};
 use crate::devices::DaemonDeviceManager;
-use crate::ip::assignment::IpAssignment;
-use crate::reconcile::zone::ip_reservation_to_network_status;
+use crate::network::assignment::NetworkAssignment;
+use crate::reconcile::zone::network_reservation_to_network_status;
 use crate::{reconcile::zone::ZoneReconcilerResult, zlt::ZoneLookupTable};
 use krata::v1::common::zone_image_spec::Image;
 use tokio::fs::{self, File};
@ -29,7 +29,7 @@ pub struct ZoneCreator<'a> {
 pub initrd_path: &'a Path,
 pub addons_path: &'a Path,
 pub packer: &'a OciPackerService,
-pub ip_assignment: &'a IpAssignment,
+pub network_assignment: &'a NetworkAssignment,
 pub zlt: &'a ZoneLookupTable,
 pub runtime: &'a Runtime,
 pub config: &'a DaemonConfig,
@ -174,7 +174,7 @@ impl ZoneCreator<'_> {
 }
 }
-let reservation = self.ip_assignment.assign(uuid).await?;
+let reservation = self.network_assignment.assign(uuid).await?;
 let mut initial_resources = spec.initial_resources.unwrap_or_default();
 if initial_resources.target_cpus < 1 {
@ -236,7 +236,7 @@ impl ZoneCreator<'_> {
 info!("created zone {}", uuid);
 zone.status = Some(ZoneStatus {
 state: ZoneState::Created.into(),
-network_status: Some(ip_reservation_to_network_status(&reservation)),
+network_status: Some(network_reservation_to_network_status(&reservation)),
 exit_status: None,
 error_status: None,
 resource_status: Some(ZoneResourceStatus {

View File

@ -7,8 +7,8 @@ use std::{
use self::create::ZoneCreator; use self::create::ZoneCreator;
use crate::config::DaemonConfig; use crate::config::DaemonConfig;
use crate::db::ip::IpReservation; use crate::db::network::NetworkReservation;
use crate::ip::assignment::IpAssignment; use crate::network::assignment::NetworkAssignment;
use crate::{ use crate::{
db::zone::ZoneStore, db::zone::ZoneStore,
devices::DaemonDeviceManager, devices::DaemonDeviceManager,
@ -27,7 +27,7 @@ use tokio::{
select, select,
sync::{ sync::{
mpsc::{channel, Receiver, Sender}, mpsc::{channel, Receiver, Sender},
Mutex, RwLock, RwLock,
}, },
task::JoinHandle, task::JoinHandle,
time::sleep, time::sleep,
@ -45,16 +45,9 @@ enum ZoneReconcilerResult {
} }
struct ZoneReconcilerEntry { struct ZoneReconcilerEntry {
task: JoinHandle<()>,
sender: Sender<()>, sender: Sender<()>,
} }
impl Drop for ZoneReconcilerEntry {
fn drop(&mut self) {
self.task.abort();
}
}
#[derive(Clone)] #[derive(Clone)]
pub struct ZoneReconciler { pub struct ZoneReconciler {
devices: DaemonDeviceManager, devices: DaemonDeviceManager,
@ -66,10 +59,10 @@ pub struct ZoneReconciler {
kernel_path: PathBuf, kernel_path: PathBuf,
initrd_path: PathBuf, initrd_path: PathBuf,
addons_path: PathBuf, addons_path: PathBuf,
tasks: Arc<Mutex<HashMap<Uuid, ZoneReconcilerEntry>>>, tasks: Arc<RwLock<HashMap<Uuid, ZoneReconcilerEntry>>>,
zone_reconciler_notify: Sender<Uuid>, zone_reconciler_notify: Sender<Uuid>,
zone_reconcile_lock: Arc<RwLock<()>>, zone_reconcile_lock: Arc<RwLock<()>>,
ip_assignment: IpAssignment, ip_assignment: NetworkAssignment,
config: Arc<DaemonConfig>, config: Arc<DaemonConfig>,
} }
@ -86,7 +79,7 @@ impl ZoneReconciler {
kernel_path: PathBuf, kernel_path: PathBuf,
initrd_path: PathBuf, initrd_path: PathBuf,
modules_path: PathBuf, modules_path: PathBuf,
ip_assignment: IpAssignment, ip_assignment: NetworkAssignment,
config: Arc<DaemonConfig>, config: Arc<DaemonConfig>,
) -> Result<Self> { ) -> Result<Self> {
Ok(Self { Ok(Self {
@ -99,7 +92,7 @@ impl ZoneReconciler {
kernel_path, kernel_path,
initrd_path, initrd_path,
addons_path: modules_path, addons_path: modules_path,
tasks: Arc::new(Mutex::new(HashMap::new())), tasks: Arc::new(RwLock::new(HashMap::new())),
zone_reconciler_notify, zone_reconciler_notify,
zone_reconcile_lock: Arc::new(RwLock::with_max_readers((), PARALLEL_LIMIT)), zone_reconcile_lock: Arc::new(RwLock::with_max_readers((), PARALLEL_LIMIT)),
ip_assignment, ip_assignment,
@ -125,7 +118,7 @@ impl ZoneReconciler {
error!("failed to start zone reconciler task {}: {}", uuid, error); error!("failed to start zone reconciler task {}: {}", uuid, error);
} }
let map = self.tasks.lock().await; let map = self.tasks.read().await;
if let Some(entry) = map.get(&uuid) { if let Some(entry) = map.get(&uuid) {
if let Err(error) = entry.sender.send(()).await { if let Err(error) = entry.sender.send(()).await {
error!("failed to notify zone reconciler task {}: {}", uuid, error); error!("failed to notify zone reconciler task {}: {}", uuid, error);
@ -202,7 +195,7 @@ impl ZoneReconciler {
if let Some(reservation) = self.ip_assignment.retrieve(uuid).await? { if let Some(reservation) = self.ip_assignment.retrieve(uuid).await? {
status.network_status = status.network_status =
Some(ip_reservation_to_network_status(&reservation)); Some(network_reservation_to_network_status(&reservation));
} }
stored_zone.status = Some(status); stored_zone.status = Some(status);
} }
@ -271,7 +264,7 @@ impl ZoneReconciler {
if destroyed { if destroyed {
self.zones.remove(uuid).await?; self.zones.remove(uuid).await?;
let mut map = self.tasks.lock().await; let mut map = self.tasks.write().await;
map.remove(&uuid); map.remove(&uuid);
} else { } else {
self.zones.update(uuid, zone.clone()).await?; self.zones.update(uuid, zone.clone()).await?;
@ -293,7 +286,7 @@ impl ZoneReconciler {
initrd_path: &self.initrd_path, initrd_path: &self.initrd_path,
addons_path: &self.addons_path, addons_path: &self.addons_path,
packer: &self.packer, packer: &self.packer,
ip_assignment: &self.ip_assignment, network_assignment: &self.ip_assignment,
zlt: &self.zlt, zlt: &self.zlt,
runtime: &self.runtime, runtime: &self.runtime,
config: &self.config, config: &self.config,
@ -337,7 +330,7 @@ impl ZoneReconciler {
} }
async fn launch_task_if_needed(&self, uuid: Uuid) -> Result<()> { async fn launch_task_if_needed(&self, uuid: Uuid) -> Result<()> {
let mut map = self.tasks.lock().await; let mut map = self.tasks.write().await;
match map.entry(uuid) { match map.entry(uuid) {
Entry::Occupied(_) => {} Entry::Occupied(_) => {}
Entry::Vacant(entry) => { Entry::Vacant(entry) => {
@@ -350,7 +343,7 @@ impl ZoneReconciler {
async fn launch_task(&self, uuid: Uuid) -> Result<ZoneReconcilerEntry> { async fn launch_task(&self, uuid: Uuid) -> Result<ZoneReconcilerEntry> {
let this = self.clone(); let this = self.clone();
let (sender, mut receiver) = channel(10); let (sender, mut receiver) = channel(10);
let task = tokio::task::spawn(async move { tokio::task::spawn(async move {
'notify_loop: loop { 'notify_loop: loop {
if receiver.recv().await.is_none() { if receiver.recv().await.is_none() {
break 'notify_loop; break 'notify_loop;
@@ -372,11 +365,11 @@ impl ZoneReconciler {
} }
} }
}); });
Ok(ZoneReconcilerEntry { task, sender }) Ok(ZoneReconcilerEntry { sender })
} }
} }
pub fn ip_reservation_to_network_status(ip: &IpReservation) -> ZoneNetworkStatus { pub fn network_reservation_to_network_status(ip: &NetworkReservation) -> ZoneNetworkStatus {
ZoneNetworkStatus { ZoneNetworkStatus {
zone_ipv4: format!("{}/{}", ip.ipv4, ip.ipv4_prefix), zone_ipv4: format!("{}/{}", ip.ipv4, ip.ipv4_prefix),
zone_ipv6: format!("{}/{}", ip.ipv6, ip.ipv6_prefix), zone_ipv6: format!("{}/{}", ip.ipv6, ip.ipv6_prefix),

View File

@@ -46,6 +46,7 @@ message ExecStreamRequestStart {
repeated string command = 2; repeated string command = 2;
string working_directory = 3; string working_directory = 3;
bool tty = 4; bool tty = 4;
ExecStreamRequestTerminalSize terminal_size = 5;
} }
message ExecStreamRequestStdin { message ExecStreamRequestStdin {
@@ -53,10 +54,16 @@ message ExecStreamRequestStdin {
bool closed = 2; bool closed = 2;
} }
message ExecStreamRequestTerminalSize {
uint32 rows = 1;
uint32 columns = 2;
}
message ExecStreamRequestUpdate { message ExecStreamRequestUpdate {
oneof update { oneof update {
ExecStreamRequestStart start = 1; ExecStreamRequestStart start = 1;
ExecStreamRequestStdin stdin = 2; ExecStreamRequestStdin stdin = 2;
ExecStreamRequestTerminalSize terminal_resize = 3;
} }
} }
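The new `terminal_resize` variant lets a client push window-size changes while the exec stream is running, alongside the existing `start` and `stdin` updates. A rough sketch of constructing that update from the prost-generated Rust types; the module path `krata::idm::internal` is an assumption here, not something confirmed by this diff:

```rust
// Assumed import path for the generated types; adjust to wherever the
// generated code actually lives in the krata crate.
use krata::idm::internal::{
    exec_stream_request_update::Update, ExecStreamRequestTerminalSize, ExecStreamRequestUpdate,
};

/// Builds the update a client would send on the exec stream when its local
/// terminal is resized.
fn resize_update(rows: u32, columns: u32) -> ExecStreamRequestUpdate {
    ExecStreamRequestUpdate {
        update: Some(Update::TerminalResize(ExecStreamRequestTerminalSize {
            rows,
            columns,
        })),
    }
}
```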

View File

@@ -134,3 +134,18 @@ enum ZoneMetricFormat {
ZONE_METRIC_FORMAT_INTEGER = 2; ZONE_METRIC_FORMAT_INTEGER = 2;
ZONE_METRIC_FORMAT_DURATION_SECONDS = 3; ZONE_METRIC_FORMAT_DURATION_SECONDS = 3;
} }
message TerminalSize {
uint32 rows = 1;
uint32 columns = 2;
}
message NetworkReservation {
string uuid = 1;
string ipv4 = 2;
string ipv6 = 3;
string mac = 4;
string gateway_ipv4 = 5;
string gateway_ipv6 = 6;
string gateway_mac = 7;
}

View File

@@ -10,13 +10,15 @@ import "krata/idm/transport.proto";
import "krata/v1/common.proto"; import "krata/v1/common.proto";
service ControlService { service ControlService {
rpc HostStatus(HostStatusRequest) returns (HostStatusReply); rpc GetHostStatus(GetHostStatusRequest) returns (GetHostStatusReply);
rpc SnoopIdm(SnoopIdmRequest) returns (stream SnoopIdmReply); rpc SnoopIdm(SnoopIdmRequest) returns (stream SnoopIdmReply);
rpc GetHostCpuTopology(GetHostCpuTopologyRequest) returns (GetHostCpuTopologyReply); rpc GetHostCpuTopology(GetHostCpuTopologyRequest) returns (GetHostCpuTopologyReply);
rpc SetHostPowerManagementPolicy(SetHostPowerManagementPolicyRequest) returns (SetHostPowerManagementPolicyReply); rpc SetHostPowerManagementPolicy(SetHostPowerManagementPolicyRequest) returns (SetHostPowerManagementPolicyReply);
rpc ListDevices(ListDevicesRequest) returns (ListDevicesReply); rpc ListDevices(ListDevicesRequest) returns (ListDevicesReply);
rpc ListNetworkReservations(ListNetworkReservationsRequest) returns (ListNetworkReservationsReply);
rpc PullImage(PullImageRequest) returns (stream PullImageReply); rpc PullImage(PullImageRequest) returns (stream PullImageReply);
rpc CreateZone(CreateZoneRequest) returns (CreateZoneReply); rpc CreateZone(CreateZoneRequest) returns (CreateZoneReply);
@@ -39,9 +41,9 @@ service ControlService {
rpc ReadHypervisorConsole(ReadHypervisorConsoleRequest) returns (ReadHypervisorConsoleReply); rpc ReadHypervisorConsole(ReadHypervisorConsoleRequest) returns (ReadHypervisorConsoleReply);
} }
message HostStatusRequest {} message GetHostStatusRequest {}
message HostStatusReply { message GetHostStatusReply {
string host_uuid = 1; string host_uuid = 1;
uint32 host_domid = 2; uint32 host_domid = 2;
string krata_version = 3; string krata_version = 3;
@@ -91,6 +93,7 @@ message ExecInsideZoneRequest {
krata.v1.common.ZoneTaskSpec task = 2; krata.v1.common.ZoneTaskSpec task = 2;
bytes stdin = 3; bytes stdin = 3;
bool stdin_closed = 4; bool stdin_closed = 4;
krata.v1.common.TerminalSize terminal_size = 5;
} }
message ExecInsideZoneReply { message ExecInsideZoneReply {
@@ -104,6 +107,7 @@ message ExecInsideZoneReply {
message ZoneConsoleRequest { message ZoneConsoleRequest {
string zone_id = 1; string zone_id = 1;
bytes data = 2; bytes data = 2;
bool replay_history = 3;
} }
message ZoneConsoleReply { message ZoneConsoleReply {
@@ -263,3 +267,9 @@ message ReadHypervisorConsoleRequest {}
message ReadHypervisorConsoleReply { message ReadHypervisorConsoleReply {
string data = 1; string data = 1;
} }
message ListNetworkReservationsRequest {}
message ListNetworkReservationsReply {
repeated krata.v1.common.NetworkReservation reservations = 1;
}
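`ListNetworkReservations` follows the same request/reply shape as the other unary RPCs in this service, and the generated client is invoked the same way as the `get_host_status` call further down in this changeset. A rough sketch of listing reservations; the helper function and its wiring are illustrative only:

```rust
use anyhow::Result;
use krata::v1::control::{
    control_service_client::ControlServiceClient, ListNetworkReservationsRequest,
};
use tonic::{transport::Channel, Request};

// Assumes `control` was already dialed, as in the NetworkService::new snippet below.
async fn print_network_reservations(control: &mut ControlServiceClient<Channel>) -> Result<()> {
    let reply = control
        .list_network_reservations(Request::new(ListNetworkReservationsRequest {}))
        .await?
        .into_inner();
    for reservation in reply.reservations {
        // Each reservation is keyed by zone uuid and carries its IPv4/IPv6 and MAC assignments.
        println!(
            "{} ipv4={} ipv6={} mac={}",
            reservation.uuid, reservation.ipv4, reservation.ipv6, reservation.mac
        );
    }
    Ok(())
}
```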

View File

@@ -495,6 +495,7 @@ impl<R: IdmRequest, E: IdmSerializable> IdmClient<R, E> {
IdmTransportPacketForm::StreamRequestClosed => { IdmTransportPacketForm::StreamRequestClosed => {
let mut update_streams = request_update_streams.lock().await; let mut update_streams = request_update_streams.lock().await;
update_streams.remove(&packet.id); update_streams.remove(&packet.id);
println!("stream request closed: {}", packet.id);
} }
IdmTransportPacketForm::StreamResponseUpdate => { IdmTransportPacketForm::StreamResponseUpdate => {

View File

@@ -16,7 +16,7 @@ clap = { workspace = true }
env_logger = { workspace = true } env_logger = { workspace = true }
etherparse = { workspace = true } etherparse = { workspace = true }
futures = { workspace = true } futures = { workspace = true }
krata = { path = "../krata", version = "^0.0.18" } krata = { path = "../krata", version = "^0.0.20" }
krata-advmac = { workspace = true } krata-advmac = { workspace = true }
libc = { workspace = true } libc = { workspace = true }
log = { workspace = true } log = { workspace = true }

View File

@@ -9,7 +9,7 @@ use krata::{
dial::ControlDialAddress, dial::ControlDialAddress,
v1::{ v1::{
common::Zone, common::Zone,
control::{control_service_client::ControlServiceClient, HostStatusRequest}, control::{control_service_client::ControlServiceClient, GetHostStatusRequest},
}, },
}; };
use log::warn; use log::warn;
@@ -47,7 +47,7 @@ impl NetworkService {
pub async fn new(control_address: ControlDialAddress) -> Result<NetworkService> { pub async fn new(control_address: ControlDialAddress) -> Result<NetworkService> {
let mut control = ControlClientProvider::dial(control_address).await?; let mut control = ControlClientProvider::dial(control_address).await?;
let host_status = control let host_status = control
.host_status(Request::new(HostStatusRequest {})) .get_host_status(Request::new(GetHostStatusRequest {}))
.await? .await?
.into_inner(); .into_inner();
let host_ipv4 = Ipv4Cidr::from_str(&host_status.host_ipv4) let host_ipv4 = Ipv4Cidr::from_str(&host_status.host_ipv4)

View File

@@ -12,20 +12,20 @@ resolver = "2"
anyhow = { workspace = true } anyhow = { workspace = true }
backhand = { workspace = true } backhand = { workspace = true }
ipnetwork = { workspace = true } ipnetwork = { workspace = true }
krata = { path = "../krata", version = "^0.0.18" } krata = { path = "../krata", version = "^0.0.20" }
krata-advmac = { workspace = true } krata-advmac = { workspace = true }
krata-oci = { path = "../oci", version = "^0.0.18" } krata-oci = { path = "../oci", version = "^0.0.20" }
log = { workspace = true } log = { workspace = true }
serde_json = { workspace = true } serde_json = { workspace = true }
tokio = { workspace = true } tokio = { workspace = true }
uuid = { workspace = true } uuid = { workspace = true }
krata-loopdev = { path = "../loopdev", version = "^0.0.18" } krata-loopdev = { path = "../loopdev", version = "^0.0.20" }
krata-xencall = { path = "../xen/xencall", version = "^0.0.18" } krata-xencall = { path = "../xen/xencall", version = "^0.0.20" }
krata-xenclient = { path = "../xen/xenclient", version = "^0.0.18" } krata-xenclient = { path = "../xen/xenclient", version = "^0.0.20" }
krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.18" } krata-xenevtchn = { path = "../xen/xenevtchn", version = "^0.0.20" }
krata-xengnt = { path = "../xen/xengnt", version = "^0.0.18" } krata-xengnt = { path = "../xen/xengnt", version = "^0.0.20" }
krata-xenplatform = { path = "../xen/xenplatform", version = "^0.0.18" } krata-xenplatform = { path = "../xen/xenplatform", version = "^0.0.20" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.18" } krata-xenstore = { path = "../xen/xenstore", version = "^0.0.20" }
walkdir = { workspace = true } walkdir = { workspace = true }
indexmap = { workspace = true } indexmap = { workspace = true }

View File

@@ -13,9 +13,9 @@ async-trait = { workspace = true }
indexmap = { workspace = true } indexmap = { workspace = true }
libc = { workspace = true } libc = { workspace = true }
log = { workspace = true } log = { workspace = true }
krata-xencall = { path = "../xencall", version = "^0.0.18" } krata-xencall = { path = "../xencall", version = "^0.0.20" }
krata-xenplatform = { path = "../xenplatform", version = "^0.0.18" } krata-xenplatform = { path = "../xenplatform", version = "^0.0.20" }
krata-xenstore = { path = "../xenstore", version = "^0.0.18" } krata-xenstore = { path = "../xenstore", version = "^0.0.20" }
regex = { workspace = true } regex = { workspace = true }
thiserror = { workspace = true } thiserror = { workspace = true }
tokio = { workspace = true } tokio = { workspace = true }

View File

@@ -16,7 +16,7 @@ flate2 = { workspace = true }
indexmap = { workspace = true } indexmap = { workspace = true }
libc = { workspace = true } libc = { workspace = true }
log = { workspace = true } log = { workspace = true }
krata-xencall = { path = "../xencall", version = "^0.0.18" } krata-xencall = { path = "../xencall", version = "^0.0.20" }
memchr = { workspace = true } memchr = { workspace = true }
nix = { workspace = true } nix = { workspace = true }
regex = { workspace = true } regex = { workspace = true }

View File

@@ -14,8 +14,8 @@ cgroups-rs = { workspace = true }
env_logger = { workspace = true } env_logger = { workspace = true }
futures = { workspace = true } futures = { workspace = true }
ipnetwork = { workspace = true } ipnetwork = { workspace = true }
krata = { path = "../krata", version = "^0.0.18" } krata = { path = "../krata", version = "^0.0.20" }
krata-xenstore = { path = "../xen/xenstore", version = "^0.0.18" } krata-xenstore = { path = "../xen/xenstore", version = "^0.0.20" }
libc = { workspace = true } libc = { workspace = true }
log = { workspace = true } log = { workspace = true }
nix = { workspace = true, features = ["ioctl", "process", "fs"] } nix = { workspace = true, features = ["ioctl", "process", "fs"] }
@@ -29,6 +29,7 @@ serde_json = { workspace = true }
sys-mount = { workspace = true } sys-mount = { workspace = true }
sysinfo = { workspace = true } sysinfo = { workspace = true }
tokio = { workspace = true } tokio = { workspace = true }
tokio-util = { workspace = true }
[lib] [lib]
name = "kratazone" name = "kratazone"

View File

@@ -1,5 +1,6 @@
use std::{collections::HashMap, process::Stdio}; use std::{collections::HashMap, process::Stdio};
use crate::childwait::ChildWait;
use anyhow::{anyhow, Result}; use anyhow::{anyhow, Result};
use krata::idm::{ use krata::idm::{
client::IdmClientStreamResponseHandle, client::IdmClientStreamResponseHandle,
@@ -16,9 +17,9 @@ use tokio::{
io::{AsyncReadExt, AsyncWriteExt}, io::{AsyncReadExt, AsyncWriteExt},
join, join,
process::Command, process::Command,
select,
}; };
use tokio_util::sync::CancellationToken;
use crate::childwait::ChildWait;
pub struct ZoneExecTask { pub struct ZoneExecTask {
pub wait: ChildWait, pub wait: ChildWait,
@@ -69,17 +70,26 @@ impl ZoneExecTask {
let code: c_int; let code: c_int;
if start.tty { if start.tty {
let pty = Pty::new().map_err(|error| anyhow!("unable to allocate pty: {}", error))?; let pty = Pty::new().map_err(|error| anyhow!("unable to allocate pty: {}", error))?;
pty.resize(Size::new(24, 80))?; let size = start
let mut child = ChildDropGuard { .terminal_size
inner: pty_process::Command::new(exe) .map(|x| Size::new(x.rows as u16, x.columns as u16))
.unwrap_or_else(|| Size::new(24, 80));
pty.resize(size)?;
let pts = pty
.pts()
.map_err(|error| anyhow!("unable to allocate pts: {}", error))?;
let child = std::panic::catch_unwind(move || {
let pts = pts;
pty_process::Command::new(exe)
.args(cmd) .args(cmd)
.envs(env) .envs(env)
.current_dir(dir) .current_dir(dir)
.spawn( .spawn(&pts)
&pty.pts() })
.map_err(|error| anyhow!("unable to allocate pts: {}", error))?, .map_err(|_| anyhow!("internal error"))
) .map_err(|error| anyhow!("failed to spawn: {}", error))??;
.map_err(|error| anyhow!("failed to spawn: {}", error))?, let mut child = ChildDropGuard {
inner: child,
kill: true, kill: true,
}; };
let pid = child let pid = child
@@ -111,9 +121,12 @@ impl ZoneExecTask {
} }
}); });
let cancel = CancellationToken::new();
let stdin_cancel = cancel.clone();
let stdin_task = tokio::task::spawn(async move { let stdin_task = tokio::task::spawn(async move {
loop { loop {
let Some(request) = receiver.recv().await else { let Some(request) = receiver.recv().await else {
stdin_cancel.cancel();
break; break;
}; };
@@ -121,43 +134,67 @@ impl ZoneExecTask {
continue; continue;
}; };
let Some(Update::Stdin(update)) = update.update else { match update.update {
continue; Some(Update::Stdin(update)) => {
}; if !update.data.is_empty()
&& write.write_all(&update.data).await.is_err()
{
break;
}
if !update.data.is_empty() && write.write_all(&update.data).await.is_err() { if update.closed {
break; break;
} }
}
if update.closed { Some(Update::TerminalResize(size)) => {
break; let _ = write.resize(Size::new(size.rows as u16, size.columns as u16));
}
_ => {
continue;
}
} }
} }
}); });
code = loop { code = loop {
if let Ok(event) = wait_subscription.recv().await { select! {
if event.pid.as_raw() as u32 == pid { result = wait_subscription.recv() => match result {
break event.status; Ok(event) => {
if event.pid.as_raw() as u32 == pid {
child.kill = false;
break event.status;
}
}
_ => {
child.inner.start_kill()?;
child.kill = false;
break -1;
}
},
_ = cancel.cancelled() => {
child.inner.start_kill()?;
child.kill = false;
break -1;
} }
} }
}; };
child.kill = false;
let _ = join!(pty_read_task); let _ = join!(pty_read_task);
stdin_task.abort(); stdin_task.abort();
} else { } else {
let mut child = Command::new(exe) let mut child = std::panic::catch_unwind(|| {
.args(cmd) Command::new(exe)
.envs(env) .args(cmd)
.current_dir(dir) .envs(env)
.stdin(Stdio::piped()) .current_dir(dir)
.stdout(Stdio::piped()) .stdin(Stdio::piped())
.stderr(Stdio::piped()) .stdout(Stdio::piped())
.kill_on_drop(true) .stderr(Stdio::piped())
.spawn() .kill_on_drop(true)
.map_err(|error| anyhow!("failed to spawn: {}", error))?; .spawn()
})
.map_err(|_| anyhow!("internal error"))
.map_err(|error| anyhow!("failed to spawn: {}", error))??;
let pid = child.id().ok_or_else(|| anyhow!("pid is not provided"))?; let pid = child.id().ok_or_else(|| anyhow!("pid is not provided"))?;
let mut stdin = child let mut stdin = child
@@ -221,9 +258,12 @@ impl ZoneExecTask {
} }
}); });
let cancel = CancellationToken::new();
let stdin_cancel = cancel.clone();
let stdin_task = tokio::task::spawn(async move { let stdin_task = tokio::task::spawn(async move {
loop { loop {
let Some(request) = receiver.recv().await else { let Some(request) = receiver.recv().await else {
stdin_cancel.cancel();
break; break;
}; };
@@ -247,9 +287,21 @@ impl ZoneExecTask {
}); });
code = loop { code = loop {
if let Ok(event) = wait_subscription.recv().await { select! {
if event.pid.as_raw() as u32 == pid { result = wait_subscription.recv() => match result {
break event.status; Ok(event) => {
if event.pid.as_raw() as u32 == pid {
break event.status;
}
}
_ => {
child.start_kill()?;
break -1;
}
},
_ = cancel.cancelled() => {
child.start_kill()?;
break -1;
} }
} }
}; };

View File

@@ -94,10 +94,10 @@ It is possible to copy these options into a `.config` file and then use
`make olddefconfig` to build the rest of the kernel configuration, which `make olddefconfig` to build the rest of the kernel configuration, which
you can then use to build a kernel as desired. you can then use to build a kernel as desired.
The [kernels][edera-kernels] repository provides some example configurations The [linux-kernel-oci][edera-linux-kernel-oci] repository provides some example configurations
and can generate a Dockerfile which will build a kernel image. and can generate a Dockerfile which will build a kernel image.
[edera-kernels]: https://github.com/edera-dev/kernels [edera-linux-kernel-oci]: https://github.com/edera-dev/linux-kernel-oci
Minimum requirements for a host kernel Minimum requirements for a host kernel
-------------------------------------- --------------------------------------

View File

@@ -16,4 +16,4 @@ fi
export TARGET_ARCH export TARGET_ARCH
TARGET_ARCH="" TARGET_LIBC="" RUST_TARGET="${HOST_RUST_TARGET}" ./hack/build/cargo.sh build -q --bin build-fetch-kernel TARGET_ARCH="" TARGET_LIBC="" RUST_TARGET="${HOST_RUST_TARGET}" ./hack/build/cargo.sh build -q --bin build-fetch-kernel
exec "target/${HOST_RUST_TARGET}/debug/build-fetch-kernel" "ghcr.io/edera-dev/kernels:latest" exec "target/${HOST_RUST_TARGET}/debug/build-fetch-kernel" "ghcr.io/edera-dev/linux-kernel:latest"