docs: fix typo

Signed-off-by: David Ko <dko@suse.com>
David Ko 2022-12-12 14:18:16 +08:00
parent c6506097fd
commit 4f35fda4b2
5 changed files with 10 additions and 10 deletions

@@ -976,11 +976,11 @@ Scenario: test recurring job concurrency
 create volume `test-job-4`.
 create volume `test-job-5`.
-Then moniter the cron job pod log.
+Then monitor the cron job pod log.
 And should see 2 jobs created concurrently.
 When update `snapshot1` recurring job with `concurrency` set to `3`.
-Then moniter the cron job pod log.
+Then monitor the cron job pod log.
 And should see 3 jobs created concurrently.
 ### Upgrade strategy
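
For context on the `concurrency` setting exercised by this scenario: with `concurrency` set to `3`, at most three per-volume jobs run at once. The sketch below illustrates that semantics with a channel semaphore; it is a minimal illustration, not Longhorn's actual recurring-job runner, and only the volume names are taken from the scenario.

```go
// Minimal illustration of recurring-job concurrency with a channel
// semaphore; NOT Longhorn's actual job runner.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	volumes := []string{"test-job-1", "test-job-2", "test-job-3", "test-job-4", "test-job-5"}
	concurrency := 3 // the recurring job's `concurrency` setting

	sem := make(chan struct{}, concurrency) // at most `concurrency` jobs in flight
	var wg sync.WaitGroup
	for _, vol := range volumes {
		wg.Add(1)
		go func(vol string) {
			defer wg.Done()
			sem <- struct{}{}        // blocks while `concurrency` jobs are already running
			defer func() { <-sem }() // frees the slot when this job finishes
			fmt.Printf("snapshot job started for volume %s\n", vol)
			time.Sleep(time.Second) // stand-in for the actual snapshot work
		}(vol)
	}
	wg.Wait()
}
```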

@@ -329,7 +329,7 @@ After the enhancement, users can directly specify the BackingImage during volume
 - Longhorn needs to verify the BackingImage if it's specified.
 - For restore/DR volumes, the BackingImage name stored in the backup volume will be used automatically if users do not specify the BackingImage name. Verify the checksum before using the BackingImage.
 - Snapshot backup:
-  - BackingImage name and checksum will be recored into BackupVolume now.
+  - BackingImage name and checksum will be recorded into BackupVolume now.
 - BackingImage creation:
   - Need to create both BackingImage CR and the BackingImageDataSource CR. Besides, a random ready disk will be picked up so that Longhorn can prepare the 1st file for the BackingImage immediately.
 - BackingImage get/list:
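
The creation flow above picks a random ready disk so the first BackingImage file can be prepared immediately. A hedged sketch of that selection step, using simplified stand-in types rather than Longhorn's real datastore API:

```go
// Hedged sketch of picking a random ready disk for the first
// BackingImage file; Disk is a simplified stand-in type.
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

// Disk is a simplified stand-in for a Longhorn node disk status entry.
type Disk struct {
	NodeID string
	Path   string
	Ready  bool
}

// pickRandomReadyDisk returns one ready disk chosen uniformly at random.
func pickRandomReadyDisk(disks []Disk) (Disk, error) {
	var ready []Disk
	for _, d := range disks {
		if d.Ready {
			ready = append(ready, d)
		}
	}
	if len(ready) == 0 {
		return Disk{}, errors.New("no ready disk available for the first BackingImage file")
	}
	return ready[rand.Intn(len(ready))], nil
}

func main() {
	disks := []Disk{
		{NodeID: "node-1", Path: "/var/lib/longhorn", Ready: true},
		{NodeID: "node-2", Path: "/var/lib/longhorn", Ready: false},
		{NodeID: "node-3", Path: "/mnt/disk1", Ready: true},
	}
	d, err := pickRandomReadyDisk(disks)
	if err != nil {
		panic(err)
	}
	fmt.Printf("prepare first BackingImage file on %s:%s\n", d.NodeID, d.Path)
}
```
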
@@ -353,7 +353,7 @@ After the enhancement, users can directly specify the BackingImage during volume
 - The server will download the file immediately once the type is `download` and the server is up.
 - A cancelled context will be put the HTTP download request. When the server is stopped/failed while downloading is still in-progress, the context can help stop the download.
 - The service will wait for 30s at max for download start. If time exceeds, the download is considered as failed.
-- The download file is in `<Disk path in containter>/tmp/<BackingImage name>-<BackingImage UUID>`
+- The download file is in `<Disk path in container>/tmp/<BackingImage name>-<BackingImage UUID>`
 - Each time when the image downloads a chunk of data, the progress will be updated. For the first time updating the progress, it means the downloading starts and the state will be updated from `starting` to `in-progress`.
 - The server is ready for handling the uploaded data once the type is `upload` and the server is up.
 - The query `size` is required for the API `upload`.
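
A minimal sketch of the download behavior described above: the HTTP request carries a cancellable context so that stopping the server aborts an in-flight download, and progress is reported per chunk. The URL, the destination path, and the way the 30s bound is applied here are illustrative assumptions, not the data source server's actual code.

```go
// Hedged sketch of a cancellable, progress-reporting HTTP download.
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func download(ctx context.Context, url, dst string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // includes context cancellation while connecting
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	buf := make([]byte, 1<<20) // 1 MiB chunks
	var written int64
	for {
		n, readErr := resp.Body.Read(buf)
		if n > 0 {
			if _, err := f.Write(buf[:n]); err != nil {
				return err
			}
			written += int64(n)
			// The first progress update marks the state transition
			// `starting` -> `in-progress`.
			fmt.Printf("progress: %d bytes\n", written)
		}
		if readErr == io.EOF {
			return nil
		}
		if readErr != nil {
			return readErr // context cancellation surfaces here mid-download
		}
	}
}

func main() {
	// Illustrative bound: the real server times out only the download
	// *start* after 30s, not the whole transfer as done here.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := download(ctx, "https://example.com/image.qcow2", "/tmp/backing-image"); err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
	}
}
```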

@@ -250,7 +250,7 @@ Integration test plan.
 * Scale down the workload to detach the `test-vol`
 * Create the same PVC `test-restore-pvc` as in the `Source volume is attached && Longhorn snapshot exist` section
 * Verify that PVC provisioning failed because the source volume is detached so Longhorn cannot verify the existence of the Longhorn snapshot in the source volume.
-* Scale up the workload to attache `test-vol`
+* Scale up the workload to attach `test-vol`
 * Wait for PVC to finish provisioning and be bounded
 * Attach the PVC `test-restore-pvc` and verify the data
 * Delete the PVC
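
For the "wait for PVC to finish provisioning" step, a hedged helper sketch using client-go; the namespace, kubeconfig path, and timeout are illustrative assumptions, not values from the test plan.

```go
// Hedged sketch: poll until a PVC reaches the Bound phase.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("PVC %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPVCBound(context.Background(), client, "default", "test-restore-pvc", 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("test-restore-pvc is Bound")
}
```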

@@ -39,7 +39,7 @@ After this enhancement, users will be able to use kubectl to query/create/delete
 The experience details should be in the `User Experience In Detail` later.
 #### Story 1
-User wants to limit the snapshot count to save space. Snapshot RecurringJobs set to Retain X number of snapshots do not touch unrelated snapshots, so if one ever changes the name of the RecurringJob, the old snapshots will stick around forever. These then have to be manually deleted in the UI. There might be some kind of browser automation framework might also work for pruning large numbers of snapshots, but this feels janky. Having a CRD for snapshots would greatly simplify this, as one could prune snapshots using kubectl, much like how one can currently manage backups using kubectl due to the existance of the `backups.longhorn.io` CRD.
+User wants to limit the snapshot count to save space. Snapshot RecurringJobs set to Retain X number of snapshots do not touch unrelated snapshots, so if one ever changes the name of the RecurringJob, the old snapshots will stick around forever. These then have to be manually deleted in the UI. There might be some kind of browser automation framework might also work for pruning large numbers of snapshots, but this feels janky. Having a CRD for snapshots would greatly simplify this, as one could prune snapshots using kubectl, much like how one can currently manage backups using kubectl due to the existence of the `backups.longhorn.io` CRD.
 ### User Experience In Detail
@@ -106,9 +106,9 @@ The life cycle of a snapshot CR is as below:
 1. **Create**
     1. When a snapshot CR is created, Longhorn mutation webhook will:
-        1. Add a volume label `longhornvolume: <VOLUME-NAME>` to the snapshot CR. This allow us to efficiently find snapshots corressponding to a volume without having listing potientially thoundsands of snapshots.
+        1. Add a volume label `longhornvolume: <VOLUME-NAME>` to the snapshot CR. This allow us to efficiently find snapshots corresponding to a volume without having listing potientially thoundsands of snapshots.
         1. Add `longhornFinalizerKey` to snapshot CR to prevent it from being removed before Longhorn has change to clean up the corresponding snapshot
-        1. Populate the value for `snapshot.OwnerReferences` to uniquely indentify the volume of this snapshot. This field contains the volume UID to uniquely identify the volume in case the old volume was deleted and a new volume was created with the same name.
+        1. Populate the value for `snapshot.OwnerReferences` to uniquely identify the volume of this snapshot. This field contains the volume UID to uniquely identify the volume in case the old volume was deleted and a new volume was created with the same name.
     2. For user created snapshot CR, the field `Spec.CreateSnapshot` should be set to `true` indicating that Longhorn should provision a new snapshot for this CR.
         1. Longhorn snapshot controller will pick up this CR, check to see if there already is a snapshot inside the `engine.Status.Snapshots`.
             1. If there is there already a snapshot inside engine.Status.Snapshots, update the snapshot.Status with the snapshot info inside `engine.Status.Snapshots`
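
A hedged sketch of the three webhook mutations listed above, using simplified stand-in types rather than Longhorn's actual webhook framework; the finalizer value and API version shown are assumptions for illustration.

```go
// Hedged sketch of the snapshot-CR mutations on create.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

const longhornFinalizerKey = "longhorn.io" // assumed value for illustration

// Snapshot and Volume are simplified stand-ins for Longhorn's CR types.
type Snapshot struct{ metav1.ObjectMeta }
type Volume struct{ metav1.ObjectMeta }

func mutateSnapshotOnCreate(snap *Snapshot, vol *Volume) {
	// 1. Volume label: lets controllers list a volume's snapshots with a
	//    label selector instead of scanning potentially thousands of CRs.
	if snap.Labels == nil {
		snap.Labels = map[string]string{}
	}
	snap.Labels["longhornvolume"] = vol.Name

	// 2. Finalizer: blocks deletion until Longhorn has had a chance to
	//    clean up the corresponding engine snapshot.
	snap.Finalizers = append(snap.Finalizers, longhornFinalizerKey)

	// 3. OwnerReference carrying the volume UID: distinguishes the owning
	//    volume from a later re-created volume with the same name.
	snap.OwnerReferences = []metav1.OwnerReference{{
		APIVersion: "longhorn.io/v1beta2", // assumed group/version
		Kind:       "Volume",
		Name:       vol.Name,
		UID:        vol.UID,
	}}
}

func main() {
	vol := &Volume{ObjectMeta: metav1.ObjectMeta{Name: "test-vol", UID: types.UID("volume-uid-example")}}
	snap := &Snapshot{ObjectMeta: metav1.ObjectMeta{Name: "snap-1"}}
	mutateSnapshotOnCreate(snap, vol)
	fmt.Printf("%+v\n", snap.ObjectMeta)
}
```
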
@@ -147,7 +147,7 @@ Anything that requires if user want to upgrade to this enhancement
 How do we address scalability issue?
 1. Controller workqueue
     1. Disable resync period for snapshot informer
-    1. Enque snapshot only when:
+    1. Enqueue snapshot only when:
         1. There is a change in snapshot CR
         1. There is a change in `engine.Status.CurrentState` (volume attach/detach event), `engine.Status.PurgeStatus` (for snapshot deletion event), `engine.Status.Snapshots` (for snapshot creation/update event)
 1. This enhancement proposal doesn't make additional call to engine process comparing to the existing design.
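
A minimal sketch of the enqueue filtering described above, with simplified stand-in types instead of Longhorn's real engine CR: an engine update enqueues work only when one of the watched status fields actually changed. In a real controller the informer factory would also be created with a resync period of 0, per the point above.

```go
// Hedged sketch of change-filtered enqueueing for the snapshot controller.
package main

import (
	"fmt"
	"reflect"

	"k8s.io/client-go/util/workqueue"
)

// EngineStatus is a simplified stand-in for Longhorn's engine.Status.
type EngineStatus struct {
	CurrentState string
	PurgeStatus  map[string]string
	Snapshots    map[string]string
}

// onEngineUpdate enqueues the engine's volume only when a field the
// snapshot controller watches has changed.
func onEngineUpdate(q workqueue.Interface, volume string, prev, cur EngineStatus) {
	if prev.CurrentState == cur.CurrentState &&
		reflect.DeepEqual(prev.PurgeStatus, cur.PurgeStatus) &&
		reflect.DeepEqual(prev.Snapshots, cur.Snapshots) {
		return // nothing the snapshot controller cares about changed
	}
	q.Add(volume) // re-reconcile the snapshots of this volume
}

func main() {
	q := workqueue.New()
	defer q.ShutDown()
	prev := EngineStatus{CurrentState: "running"}
	cur := EngineStatus{CurrentState: "stopped"} // attach/detach event
	onEngineUpdate(q, "test-vol", prev, cur)
	fmt.Println("queue length:", q.Len())
}
```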

@@ -57,7 +57,7 @@ After the enhancement, users can directly reclaim the space by trimming the file
 1. The process creation function should specify the option `unmap-mark-disk-chain-removed`.
 #### longhorn-engine:
-1. Update dependency `rancher/tgt`, `longhorn/longhornlib`, and `longhorn/sparse-tools` for the opertaion `UNMAP` support.
+1. Update dependency `rancher/tgt`, `longhorn/longhornlib`, and `longhorn/sparse-tools` for the operation `UNMAP` support.
 2. Add new option `unmap-mark-snap-chain-removed` for the engine process creation call.
    Add new option `unmap-mark-disk-chain-removed` for the replica process creation call.
 3. Add a new API `unmap-mark-snap-chain-removed` to update the field for the engine and all its replicas.
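
A minimal sketch of wiring the two new boolean options into process creation, using the standard `flag` package rather than longhorn-engine's actual CLI framework; the flag names match the options above, but the wiring is illustrative.

```go
// Hedged sketch: boolean CLI options for the UNMAP chain-removal behavior.
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Engine side: whether UNMAP may mark the snapshot chain removed.
	unmapMarkSnapChainRemoved := flag.Bool("unmap-mark-snap-chain-removed", false,
		"mark the snapshot chain as removed when the device receives UNMAP")
	// Replica side: the equivalent per-disk-chain option.
	unmapMarkDiskChainRemoved := flag.Bool("unmap-mark-disk-chain-removed", false,
		"mark the disk chain as removed when the replica receives UNMAP")
	flag.Parse()

	fmt.Printf("engine:  unmap-mark-snap-chain-removed=%v\n", *unmapMarkSnapChainRemoved)
	fmt.Printf("replica: unmap-mark-disk-chain-removed=%v\n", *unmapMarkDiskChainRemoved)
}
```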