docs: add liqoctl info documentation
claudiolor authored and cheina97 committed Oct 3, 2024
1 parent 20d9d8f commit 0c8c92d
Showing 9 changed files with 181 additions and 25 deletions.
8 changes: 5 additions & 3 deletions docs/advanced/peering/inter-cluster-authentication.md
@@ -259,20 +259,20 @@ Once the Identity resource is correctly applied, the clusters are able to negoti
All in all, these are the steps to be followed by the administrators of each of the clusters to manually complete the authentication process:

1. **Cluster provider**: creates the nonce to be provided to the **cluster consumer** administrator:

```bash
liqoctl create nonce --remote-cluster-id $CLUSTER_CONSUMER_ID
liqoctl get nonce --remote-cluster-id $CLUSTER_CONSUMER_ID > nonce.txt
```

2. **Cluster consumer**: generates the `Tenant` resource to be applied by the **cluster provider**:

```bash
liqoctl generate tenant --remote-cluster-id $CLUSTER_PROVIDER_ID --nonce $(cat nonce.txt) > tenant.yaml
```

3. **Cluster provider**: applies `tenant.yaml` and generates the `Identity` resource to be applied by the consumer:

```bash
kubectl apply -f tenant.yaml
liqoctl generate identity --remote-cluster-id $CLUSTER_CONSUMER_ID > identity.yaml
@@ -283,3 +283,5 @@ All in all, these are the steps to be followed by the administrators of each of
```bash
kubectl apply -f identity.yaml
```

You can check whether the procedure completed successfully by checking [the peering status](../../usage/peer.md#check-status-of-peerings).
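
For instance, a possible quick check (a sketch; the exact layout of the report may vary) is to run `liqoctl info` and verify that the remote cluster is listed under the "Active peerings" section with a healthy authentication status:

```bash
liqoctl info
```
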
31 changes: 27 additions & 4 deletions docs/advanced/peering/inter-cluster-network.md
@@ -227,14 +227,23 @@ spec:
pod: 10.243.0.0/16 # the pod CIDR of the remote cluster
```
You can find *REMOTE_CLUSTER_ID* these parameters in the output of the
You can find the value of the *REMOTE_CLUSTER_ID* by launching the following command on the **remote cluster**:
`````{tab-set}
````{tab-item} liqoctl

```bash
liqoctl info --get clusterid
```
````
````{tab-item} kubectl

```bash
kubectl get configmaps -n liqo liqo-clusterid-configmap \
--template {{.data.CLUSTER_ID}}
```

command in the remote cluster.
````
`````
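
For instance, a minimal sketch to capture this value in a shell variable (the variable name is illustrative, and the command is assumed to run against the remote cluster's kubeconfig):

```bash
# Store the cluster ID of the cluster the current kubeconfig points to
REMOTE_CLUSTER_ID=$(liqoctl info --get clusterid)
echo "${REMOTE_CLUSTER_ID}"
```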

```{admonition} Tip
You can generate this file with the command `liqoctl generate configuration` executed in the remote cluster.
@@ -291,13 +300,25 @@ NAMESPACE NAME TEMPLATE NAME IP PORT AGE
default server wireguard-server 10.42.3.54 32133 84s
```
`````{tab-set}
````{tab-item} liqoctl
```bash
kubectl get gatewayservers --template {{.status.endpoint}}
liqoctl info peer <REMOTE_CLUSTER_ID> --get network.gateway
```
````

````{tab-item} kubectl
```bash
kubectl get gatewayservers --template {{.status.endpoint}} -n <GATEWAY_NS> <GATEWAY_NAME>
```
```text
map[addresses:[172.19.0.9] port:32701 protocol:UDP]
```
````
`````
#### Creation of a gateway client
@@ -475,6 +496,8 @@ Resuming, these are the steps to be followed by the administrators of each of th
kubectl apply -f publickey-client.yaml
```
You can check whether the procedure completed successfully by checking [the peering status](../../usage/peer.md#check-status-of-peerings).
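
For instance, a possible check (a sketch; the exact layout of the peer report may vary) is to verify that the peering is reported with a healthy networking status:

```bash
liqoctl info peer <REMOTE_CLUSTER_ID>
```
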
## Custom templates
Gateway resources (i.e., `GatewayServer` and `GatewayClient`) contain a reference to the template CR implementing the inter-cluster network technology.
16 changes: 10 additions & 6 deletions docs/advanced/peering/offloading-in-depth.md
@@ -68,7 +68,7 @@ To add other resources like `ephemeral-storage`, `gpu` or any other custom resou
:caption: "Cluster consumer"
kubectl get resourceslices.authentication.liqo.io -A
```

```text
NAMESPACE NAME AUTHENTICATION RESOURCES AGE
liqo-tenant-cool-firefly mypool Accepted Accepted 19s
@@ -80,7 +80,7 @@ At the same time, in the **provider cluster**, a `Quota` will be created to limi
:caption: "Cluster provider"
kubectl get quotas.offloading.liqo.io -A
```

```text
NAMESPACE NAME ENFORCEMENT CORDONED AGE
liqo-tenant-wispy-firefly mypool-c34af51dd912 None 36s
@@ -92,7 +92,7 @@ After a few seconds, in the **consumer cluster**, a new `VirtualNode` will be cr
:caption: "Cluster consumer"
kubectl get virtualnodes.offloading.liqo.io -A
```

```text
NAMESPACE NAME CLUSTERID CREATE NODE AGE
liqo-tenant-cool-firefly mypool cool-firefly true 59s
@@ -104,7 +104,7 @@ A new `Node` will be available in the consumer cluster with the name `mypool` pr
:caption: "Cluster consumer"
kubectl get node
```

```text
NAME STATUS ROLES AGE VERSION
cluster-1-control-plane-fsvkj Ready control-plane 30m v1.27.4
@@ -177,7 +177,7 @@ This command will create a `VirtualNode` named `mynode` in the consumer cluster,
:caption: "Cluster consumer"
kubectl get virtualnodes.offloading.liqo.io -A
```

```text
NAMESPACE NAME CLUSTERID CREATE NODE AGE
liqo-tenant-cool-firefly mynode cool-firefly true 7s
@@ -189,7 +189,7 @@ A new `Node` will be available in the consumer cluster with the name `mynode` pr
:caption: "Cluster consumer"
kubectl get node
```

```text
NAME STATUS ROLES AGE VERSION
cluster-1-control-plane-fsvkj Ready control-plane 52m v1.27.4
@@ -258,6 +258,10 @@ metadata:
type: Opaque
```
### Check shared resources and virtual nodes
Via `liqoctl` it is possible to check the amount of shared resources and the virtual nodes configured for a specific peering by looking at [the peering status](../../usage/peer.md#check-status-of-peerings).
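
A possible way to inspect them (a sketch reusing the `kubectl` queries shown earlier in this page):

```bash
# Resources shared for each peering (run in the consumer cluster)
kubectl get resourceslices.authentication.liqo.io -A
# Virtual nodes backed by the peered clusters (run in the consumer cluster)
kubectl get virtualnodes.offloading.liqo.io -A
```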

### Delete VirtualNode

You can revert the process by deleting the `VirtualNode` in the consumer cluster.
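
For example (a hypothetical invocation, reusing the `mynode` virtual node and the tenant namespace shown above):

```bash
kubectl delete virtualnodes.offloading.liqo.io mynode -n liqo-tenant-cool-firefly
```
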
4 changes: 3 additions & 1 deletion docs/conf.py
@@ -45,6 +45,8 @@
myst_enable_extensions = [
"substitution",
]
# Enable slug generation for headings to reference them in markdown links
myst_heading_anchors = 3

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -208,7 +210,7 @@ def generate_liqoctl_install(platform: str, arch: str) -> str:
curl --fail -LS \"{file}\" | tar -xz\n\
sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl\n\
```\n"

def generate_helm_install() -> str:
version=generate_semantic_version()
return f"```bash\n\
62 changes: 59 additions & 3 deletions docs/examples/quick-start.md
@@ -122,6 +122,28 @@ liqo-proxy-599958d9b8-6fzfc 1/1 Running 0 8m15s
liqo-webhook-8fbd8c664-pxrfh 1/1 Running 0 8m15s
```

At this point, it is possible to check the status and get info about the current Liqo instance by running:

```bash
liqoctl info
```

```text
─ Local installation info ────────────────────────────────────────────────────────
Cluster ID: milan
Version: v1.0.0-rc.2
K8s API server: https://172.19.0.10:6443
Cluster labels
liqo.io/provider: kind
──────────────────────────────────────────────────────────────────────────────────
─ Installation health ────────────────────────────────────────────────────────────
✔ Liqo is healthy
──────────────────────────────────────────────────────────────────────────────────
─ Active peerings ────────────────────────────────────────────────────────────────
──────────────────────────────────────────────────────────────────────────────────
```
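
Single fields of the report can also be retrieved directly; for example (a sketch using the `--get` flag), to print only the local cluster ID:

```bash
liqoctl info --get clusterid
```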

## Peer two clusters

Once Liqo is installed in your clusters, you can establish new *peerings*.
@@ -182,17 +204,51 @@ The output should look like this:
You can check the peering status by running:

```bash
kubectl get foreignclusters
liqoctl info
```

The output should look like the following, indicating the relationship the foreign cluster has with the local cluster:
In the output, you should see that a new peer has appeared in the "Active peerings" section:

```text
─ Local installation info ────────────────────────────────────────────────────────
Cluster ID: rome
Version: v1.0.0-rc.2
K8s API server: https://172.19.0.9:6443
Cluster labels
liqo.io/provider: kind
──────────────────────────────────────────────────────────────────────────────────
─ Installation health ────────────────────────────────────────────────────────────
✔ Liqo is healthy
──────────────────────────────────────────────────────────────────────────────────
─ Active peerings ────────────────────────────────────────────────────────────────
milan
Role: Provider
Networking status: Healthy
Authentication status: Healthy
Offloading status: Healthy
──────────────────────────────────────────────────────────────────────────────────
```

````{admonition} Tip
To get additional info about a specific peering you can run:
```bash
liqoctl info peer milan
```
````
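
Single fields of the peer report can be extracted as well; for example (a sketch based on the `--get` flag used elsewhere in these docs), to print only the gateway endpoint of the peering:

```bash
liqoctl info peer milan --get network.gateway
```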

Additionally, you should be able to see a new CR describing the relationship with the foreign cluster:

```bash
kubectl get foreignclusters
```

```text
NAME ROLE AGE
milan Provider 52s
```

At the same time, you should see a virtual node (`milan`) in addition to your physical nodes:
Moreover, you should be able to see a new virtual node (`milan`) among the list of nodes in the cluster:

```bash
kubectl get nodes
4 changes: 2 additions & 2 deletions docs/examples/service-offloading.md
@@ -111,14 +111,14 @@ Let's now consume the Service from both clusters from a different pod (e.g., a t
Starting from the *London* cluster:

```bash
kubectl run consumer --image=curlimages/curl --rm --restart=Never \
kubectl run consumer -it --image=curlimages/curl --rm --restart=Never \
-- curl -s -H 'accept: application/json' http://flights-service.liqo-demo:7999/schedule
```

A similar result is obtained executing the same command in a shell running in the *New York* cluster, although the backend pod is effectively running in the *London* cluster:

```bash
kubectl run consumer --image=curlimages/curl --rm --restart=Never \
kubectl run consumer -it --image=curlimages/curl --rm --restart=Never \
--kubeconfig $KUBECONFIG_NEWYORK \
-- curl -s -H 'accept: application/json' http://flights-service.liqo-demo:7999/schedule
```
2 changes: 1 addition & 1 deletion docs/features/network-fabric.md
@@ -15,7 +15,7 @@ The figure below represents at a high level the network fabric established betwe
The **controller-manager** (not shown in the figure) contains the **control plane** of the Liqo network fabric.
It runs as a pod (**liqo-controller-manager**) and is responsible for **setting up the network CRDs** during the connection process to a remote cluster.
This includes the management of potential **network conflicts** through the definition of high-level NAT rules (enforced by the data plane components).
Specifically, network CRDs are used to handle the [Translation of Pod IPs] (usageReflectionPods) (i.e. during the synchronisation process from the remote to the local cluster), as well as during the [EndpointSlices reflection] (usageReflectionEndpointSlices) (i.e. propagation from the local to the remote cluster).
Specifically, network CRDs are used to handle the [Translation of Pod IPs](usageReflectionPods) (i.e. during the synchronisation process from the remote to the local cluster), as well as during the [EndpointSlices reflection](usageReflectionEndpointSlices) (i.e. propagation from the local to the remote cluster).

An **IP Address Management (IPAM) plugin** is included in another pod (**liqo-ipam**).
It exposes an interface that is consumed by the **controller-manager** to handle **IPs acquisitions**.
34 changes: 32 additions & 2 deletions docs/installation/install.md
@@ -97,7 +97,7 @@ If the private cluster uses private link, you can set the `--private-link` *liqo
```{admonition} Virtual Network Resource Group
By default, it is assumed the Virtual Network Resource for the AKS Subnet is located in the same Resource Group
as the AKS Resource. If that is not the case, you will need to use the `--vnet-resource-group-name` flag to provide the
as the AKS Resource. If that is not the case, you will need to use the `--vnet-resource-group-name` flag to provide the
correct Resource Group name where the Virtual Network Resource is located.
```
````
@@ -201,7 +201,7 @@ Liqo supports GKE clusters using the default CNI: [Google GKE - VPC-Native](http
Liqo does NOT support:
* GKE Autopilot Clusters
* Intranode visibility: make sure this option is disabled or use the `--no-enable-intra-node-visibility` flag.
* Intranode visibility: make sure this option is disabled or use the `--no-enable-intra-node-visibility` flag.
* Accessing offloaded pods from NodePort/LoadBalancer services [**only on Dataplane V2**].
```
@@ -484,6 +484,36 @@ liqoctl install <provider> --version <commit-sha> --local-chart-path <path-to-lo

(InstallationCNIConfiguration)=

## Check installation

After the installation, you can check the status and info about the current instance of Liqo via `liqoctl`:

```bash
liqoctl info
```

The `info` command is a convenient way to check:

* Health of the installation
* Current configuration
* Status and info about active peerings

By default, the output is presented in a human-readable form.
However, to simplify automated retrieval of the data, the `-o` option allows formatting the output as **JSON or YAML**.
Moreover, via the `--get field.subfield` argument, each field of the report can be retrieved individually.

For example:

```{code-block} bash
:caption: Get the output in JSON format
liqoctl info -o json
```

```{code-block} bash
:caption: Get the podCIDR of the local Liqo instance
liqoctl info --get network.podcidr
```
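
These options combine naturally in scripts; a minimal sketch (the variable name is illustrative):

```bash
# Store the local pod CIDR in a shell variable for later use
POD_CIDR=$(liqoctl info --get network.podcidr)
echo "Local pod CIDR: ${POD_CIDR}"
```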

## CNIs

### Cilium