Commit 6e474de

Feedback from translators on images (#795) (#798)
Signed-off-by: Ranjini M N <[email protected]>
1 parent f3ab1ac commit 6e474de

17 files changed, +47 -47 lines changed
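
The pattern repeated across every file shown in this commit is the same: an AsciiDoc image macro gains a `scaledwidth=100%` attribute so that screenshots and diagrams scale to the full available width of the rendered page (Asciidoctor documents `scaledwidth` as applying to print-oriented backends such as DocBook/PDF; HTML output generally uses `width` instead). A minimal before/after sketch of the macro syntax; the file name and alt text below are hypothetical examples, not taken from this commit:

// before: no sizing attribute, so the backend default applies
image::example-screenshot.png[]

// after: scale to 100% of the available width
image::example-screenshot.png[scaledwidth=100%]

// when positional alt text is already present (see nvidia-slemicro.adoc below),
// the named attribute is appended after it
image::example-screenshot.png[Example alt text,scaledwidth=100%]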

asciidoc/components/akri.adoc

Lines changed: 5 additions & 5 deletions

@@ -115,20 +115,20 @@ See <<components-rancher-dashboard-extensions>> for installation guidance.

 Once the extension is installed, you can navigate to any Akri-enabled managed cluster using the cluster explorer. Under the *Akri* navigation group you can see the *Configurations* and *Instances* sections.

-image::akri-extension-configurations.png[]
+image::akri-extension-configurations.png[scaledwidth=100%]

 The configurations list provides information about the `Configuration Discovery Handler` and the number of instances. Clicking the name opens a configuration detail page.

-image::akri-extension-configuration-detail.png[]
+image::akri-extension-configuration-detail.png[scaledwidth=100%]

 You can also edit or create a new *Configuration*. The extension allows you to select a discovery handler, set up the broker pod or job, customize configurations and instance services, and set the configuration capacity.

-image::akri-extension-configuration-edit.png[]
+image::akri-extension-configuration-edit.png[scaledwidth=100%]

 Discovered devices are listed in the *Instances* list.

-image::akri-extension-instances-list.png[]
+image::akri-extension-instances-list.png[scaledwidth=100%]

 Clicking the *Instance* name opens a detail page allowing you to view the workloads and the instance service.

-image::akri-extension-instance-detail.png[]
+image::akri-extension-instance-detail.png[scaledwidth=100%]

asciidoc/components/fleet.adoc

Lines changed: 5 additions & 5 deletions

@@ -35,7 +35,7 @@ Fleet shines as an integrated part of Rancher. Clusters managed with Rancher aut

 Fleet comes preinstalled in Rancher and is managed by the *Continuous Delivery* option in the Rancher UI.

-image::fleet-dashboard.png[]
+image::fleet-dashboard.png[scaledwidth=100%]

 The Continuous Delivery section consists of the following items:

@@ -77,13 +77,13 @@ helm:

 3. The Repository creation wizard guides you through the creation of the Git repo. Provide a *Name* and a *Repository URL* (referencing the Git repository created in the previous step), and select the appropriate branch or revision. In the case of a more complex repository, specify *Paths* to use multiple directories in a single repository.
 +
-image::fleet-create-repo1.png[]
+image::fleet-create-repo1.png[scaledwidth=100%]

 4. Click `Next`.

 5. In the next step, you can define where the workloads will get deployed. Cluster selection offers several basic options: you can select no clusters, all clusters, or directly choose a specific managed cluster or cluster group (if defined). The "Advanced" option allows you to edit the selectors directly via YAML.
 +
-image::fleet-create-repo2.png[]
+image::fleet-create-repo2.png[scaledwidth=100%]

 6. Click `Create`. The repository gets created. From now on, the workloads are installed and kept in sync on the clusters matching the repository definition.

@@ -93,12 +93,12 @@ The "Advanced" navigation section provides overviews of lower-level Fleet resour

 To find the bundles relevant to a specific repository, go to the Git repo detail page and click the `Bundles` tab.

-image::fleet-repo-bundles.png[]
+image::fleet-repo-bundles.png[scaledwidth=100%]

 For each cluster the bundle is applied to, a BundleDeployment resource is created. To view BundleDeployment details, click the `Graph` button in the upper right of the Git repo detail page.
 A graph of *Repo > Bundles > BundleDeployments* is loaded. Click the BundleDeployment in the graph to see its details and click the `Id` to view the BundleDeployment YAML.

-image::fleet-repo-graph.png[]
+image::fleet-repo-graph.png[scaledwidth=100%]

 For additional Fleet troubleshooting tips, refer to https://fleet.rancher.io/troubleshooting[the Fleet documentation].
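
Step 5 above mentions that the "Advanced" option lets you edit the cluster selectors directly as YAML. For orientation, a minimal sketch of what such a Fleet `GitRepo` resource can look like; the name, URL, path, and label are hypothetical examples, not values from this commit:

[source,yaml]
----
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo            # hypothetical name
  namespace: fleet-default      # default namespace for downstream targets
spec:
  repo: https://github.com/example/fleet-examples  # hypothetical URL
  branch: main
  paths:
    - simple                    # subdirectory to deploy (step 3's *Paths*)
  targets:
    - clusterSelector:          # step 5's selector, expressed as YAML
        matchLabels:
          env: edge             # hypothetical label
----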

asciidoc/components/rancher-dashboard-extensions.adoc

Lines changed: 4 additions & 4 deletions

@@ -37,17 +37,17 @@ Akri Dashboard Extension Repository URL:
 KubeVirt Dashboard Extension Repository URL:
 `oci://registry.suse.com/edge/{version-edge-registry}/kubevirt-dashboard-extension-chart`
 +
-image::dashboard-extensions-create-oci-repository.png[]
+image::dashboard-extensions-create-oci-repository.png[scaledwidth=100%]

 . You can see that the extension repository is added to the list and is in the `Active` state.
 +
-image::dashboard-extensions-repositories-list.png[]
+image::dashboard-extensions-repositories-list.png[scaledwidth=100%]

 . Navigate back to *Extensions* in the *Configuration* section of the navigation sidebar.
 +
 In the *Available* tab you can see the extensions available for installation.
 +
-image::dashboard-extensions-available-extensions.png[]
+image::dashboard-extensions-available-extensions.png[scaledwidth=100%]

 . On the extension card, click `Install` and confirm the installation.
 +
@@ -125,7 +125,7 @@ For more information, see <<components-fleet>> and the `https://github.com/suse-

 Once the Extensions are installed, they are listed in the *Extensions* section under the *Installed* tab. Since they are not installed via Apps/Marketplace, they are marked with the `Third-Party` label.

-image::installed-dashboard-extensions.png[]
+image::installed-dashboard-extensions.png[scaledwidth=100%]

 == KubeVirt Dashboard Extension
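
A note on the lone `+` lines that appear as context in these hunks (here and in the Fleet and day2 files): in AsciiDoc, a `+` on a line by itself is the list-continuation marker, which attaches the following block, such as an image, to the preceding list item instead of ending the list. A minimal sketch with hypothetical content:

. Add the extension repository.
+
image::repository-added.png[scaledwidth=100%]
+
The screenshot stays attached to this step rather than starting a new paragraph outside the list.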

asciidoc/day2/fleet-helm-upgrade.adoc

Lines changed: 5 additions & 5 deletions

@@ -529,7 +529,7 @@ As mentioned previously this will trigger the `helm-controller` which will perfo

 Below you can find a diagram of the above description:

-image::fleet-day2-{cluster-type}-helm-eib-upgrade.png[]
+image::fleet-day2-{cluster-type}-helm-eib-upgrade.png[scaledwidth=100%]

 [#{cluster-type}-day2-fleet-helm-upgrade-procedure-eib-deployed-chart-upgrade-steps]
 ===== Upgrade Steps
@@ -884,7 +884,7 @@ endif::[]
 . Deploy the Bundle through the Rancher UI:
 +
 .Deploy Bundle through Rancher UI
-image::day2_helm_chart_upgrade_example_1.png[]
+image::day2_helm_chart_upgrade_example_1.png[scaledwidth=100%]
 +
 From here, select *Read from File* and find the `bundle.yaml` file on your system.
 +
@@ -895,13 +895,13 @@ Select *Create*.
 . After a successful deployment, your Bundle would look similar to:
 +
 .Successfully deployed Bundle
-image::day2_helm_chart_upgrade_example_2.png[]
+image::day2_helm_chart_upgrade_example_2.png[scaledwidth=100%]

 After the successful deployment of the `Bundle`, monitor the upgrade process as follows:

 . Verify the logs of the `Upgrade Pod`:
 +
-image::day2_helm_chart_upgrade_example_3_{cluster-type}.png[]
+image::day2_helm_chart_upgrade_example_3_{cluster-type}.png[scaledwidth=100%]

 . Now verify the logs of the Pod created for the upgrade by the helm-controller:

@@ -911,7 +911,7 @@ image::day2_helm_chart_upgrade_example_3_{cluster-type}.png[]
 +
 .Logs for successfully upgraded Longhorn chart
 +
-image::day2_helm_chart_upgrade_example_4_{cluster-type}.png[]
+image::day2_helm_chart_upgrade_example_4_{cluster-type}.png[scaledwidth=100%]

 . Verify that the `HelmChart` version has been updated by navigating to Rancher's `HelmCharts` section (`More Resources -> HelmCharts`). Select the namespace where the chart was deployed; for this example, it is `kube-system`.
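
The `{cluster-type}` token inside the image targets above is an AsciiDoc document attribute, so a single source file can resolve to a different diagram or screenshot per build variant; the `.Deploy Bundle through Rancher UI` style lines are block titles that render as figure captions. A minimal sketch of both mechanisms, assuming a hypothetical attribute value of `downstream`:

:cluster-type: downstream

.Logs for successfully upgraded Longhorn chart
image::day2_helm_chart_upgrade_example_4_{cluster-type}.png[scaledwidth=100%]
// resolves to day2_helm_chart_upgrade_example_4_downstream.png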

asciidoc/day2/fleet-k8s-upgrade.adoc

Lines changed: 1 addition & 1 deletion

@@ -96,7 +96,7 @@ Once the `K8s SUC plans` are deployed, the workflow looks like this:

 Below you can find a diagram of the above description:

-image::fleet-day2-{cluster-type}-k8s-upgrade.png[]
+image::fleet-day2-{cluster-type}-k8s-upgrade.png[scaledwidth=100%]

 [#{cluster-type}-day2-fleet-k8s-upgrade-requirements]
 === Requirements

asciidoc/day2/fleet-os-upgrade.adoc

Lines changed: 1 addition & 1 deletion

@@ -87,7 +87,7 @@ Once the OS upgrade process finishes, the corresponding node will be `rebooted`

 Below you can find a diagram of the above description:

-image::fleet-day2-{cluster-type}-os-upgrade.png[]
+image::fleet-day2-{cluster-type}-os-upgrade.png[scaledwidth=100%]

 [#{cluster-type}-day2-fleet-os-upgrade-requirements]
 === Requirements

asciidoc/edge-book/welcome.adoc

Lines changed: 3 additions & 3 deletions

@@ -36,7 +36,7 @@ SUSE Edge is comprised of both existing SUSE and Rancher components along with a

 ==== Management Cluster

-image::suse-edge-management-cluster.svg[]
+image::suse-edge-management-cluster.svg[scaledwidth=100%]

 * *Management*: This is the centralized part of SUSE Edge that is used to manage the provisioning and lifecycle of connected downstream clusters. The management cluster typically includes the following components:
 ** Multi-cluster management with <<components-rancher,Rancher Prime>>, enabling a common dashboard for downstream cluster onboarding and ongoing lifecycle management of infrastructure and applications, also providing comprehensive tenant isolation and `IDP` (Identity Provider) integrations, a large marketplace of third-party integrations and extensions, and a vendor-neutral API.
@@ -49,7 +49,7 @@ image::suse-edge-management-cluster.svg[]

 ==== Downstream Clusters

-image::suse-edge-downstream-cluster.svg[]
+image::suse-edge-downstream-cluster.svg[scaledwidth=100%]

 * *Downstream*: This is the distributed part of SUSE Edge that is used to run the user workloads at the Edge, i.e. the software that is running at the edge location itself, and is typically comprised of the following components:
 ** A choice of Kubernetes distributions, with secure and lightweight distributions like <<components-k3s,K3s>> and <<components-rke2,RKE2>> (`RKE2` is hardened, certified and optimized for usage in government and regulated industries).
@@ -60,7 +60,7 @@ image::suse-edge-downstream-cluster.svg[]

 === Connectivity

-image::suse-edge-connected-architecture.svg[]
+image::suse-edge-connected-architecture.svg[scaledwidth=100%]

 The above image provides a high-level architectural overview for *connected* downstream clusters and their attachment to the management cluster. The management cluster can be deployed on a wide variety of underlying infrastructure platforms, in both on-premises and cloud capacities, depending on networking availability between the downstream clusters and the target management cluster. The only requirement for this to function is that the API and callback URLs are accessible over the network that connects downstream cluster nodes to the management infrastructure.

asciidoc/guides/air-gapped-eib-deployments.adoc

Lines changed: 1 addition & 1 deletion

@@ -400,7 +400,7 @@ replicaset.apps/system-upgrade-controller-56696956b 1 1 1

 And when we go to `\https://192.168.100.50.sslip.io` and log in with the `adminadminadmin` password that we set earlier, we are greeted with the Rancher dashboard:

-image::air-gapped-rancher.png[]
+image::air-gapped-rancher.png[scaledwidth=100%]

 == SUSE Security Installation [[suse-security-install]]
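
The backslash in `\https://192.168.100.50.sslip.io` above is worth noting: in AsciiDoc it suppresses the automatic linking of a URL, so the address renders as literal monospaced text instead of a hyperlink. A minimal sketch, with a hypothetical address:

`\https://192.0.2.10.sslip.io` renders as plain text in monospace.
https://example.com renders as a clickable link.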

asciidoc/guides/clusterclass.adoc

Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ The implementation of ClusterClass yields several key advantages that address th
 * Improved Scalability and Automation Capabilities
 * Declarative Management and Robust Version Control

-image::clusterclass.png[]
+image::clusterclass.png[scaledwidth=100%]
asciidoc/integrations/nvidia-slemicro.adoc

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ We recommend that you ensure that the driver version that you are selecting is c
 ====
 To find the NVIDIA open-driver versions, either run `zypper se -s nvidia-open-driver` on the target machine _or_ search the SUSE Customer Center for "nvidia-open-driver" in {link-nvidia-open-driver}[SUSE Linux Micro {version-operatingsystem} for {x86-64}].

-image::scc-packages-nvidia.png[SUSE Customer Centre]
+image::scc-packages-nvidia.png[SUSE Customer Centre,scaledwidth=100%]
 ====

 When you have confirmed that an equivalent version is available in the NVIDIA repos, you are ready to install the packages on the host operating system. For this, we need to open up a `transactional-update` session, which creates a new read/write snapshot of the underlying operating system so we can make changes to the immutable platform (for further instructions on `transactional-update`, see {link-micro-transactional-updates}[here]):
