Compare commits
No commits in common. "master" and "v0.8.0" have entirely different histories.
@@ -1,5 +0,0 @@
aks
ec2
eks
gce
gcp
.github/CODEOWNERS (vendored, 1 line deleted)
@@ -1 +0,0 @@
* @longhorn/dev
.github/ISSUE_TEMPLATE/bug.md (vendored, 48 lines deleted)
@@ -1,48 +0,0 @@
---
name: Bug report
about: Create a bug report
title: "[BUG]"
labels: ["kind/bug", "require/qa-review-coverage", "require/backport"]
assignees: ''

---

## Describe the bug (🐛 if you encounter this issue)

<!--A clear and concise description of what the bug is.-->

## To Reproduce

<!--Provide the steps to reproduce the behavior.-->

## Expected behavior

<!--A clear and concise description of what you expected to happen.-->

## Support bundle for troubleshooting

<!--Provide a support bundle when the issue happens. You can generate a support bundle using the link at the footer of the Longhorn UI. Check [here](https://longhorn.io/docs/latest/advanced-resources/support-bundle/).-->

## Environment

<!-- Suggest checking the doc of the best practices of using Longhorn. [here](https://longhorn.io/docs/1.5.1/best-practices)-->
- Longhorn version:
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl):
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version:
  - Number of management nodes in the cluster:
  - Number of worker nodes in the cluster:
- Node config
  - OS type and version:
  - Kernel version:
  - CPU per node:
  - Memory per node:
  - Disk type (e.g. SSD/NVMe/HDD):
  - Network bandwidth between the nodes:
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal):
- Number of Longhorn volumes in the cluster:
- Impacted Longhorn resources:
  - Volume names:

## Additional context

<!--Add any other context about the problem here.-->
.github/ISSUE_TEMPLATE/doc.md (vendored, 16 lines deleted)
@@ -1,16 +0,0 @@
---
name: Document
about: Create or update a document
title: "[DOC] "
labels: kind/doc
assignees: ''

---

## What's the document you plan to update? Why? Please describe

<!--A clear and concise description of what the document is.-->

## Additional context

<!--Add any other context or screenshots about the document request here.-->
.github/ISSUE_TEMPLATE/feature.md (vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
---
name: Feature request
about: Suggest an idea/feature
title: "[FEATURE] "
labels: ["kind/enhancement", "require/lep", "require/doc", "require/auto-e2e-test"]
assignees: ''

---

## Is your feature request related to a problem? Please describe (👍 if you like this request)

<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the feature request here.-->
.github/ISSUE_TEMPLATE/improvement.md (vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
---
name: Improvement request
about: Suggest an improvement of an existing feature
title: "[IMPROVEMENT] "
labels: ["kind/improvement", "require/doc", "require/auto-e2e-test", "require/backport"]
assignees: ''

---

## Is your improvement request related to a feature? Please describe (👍 if you like this request)

<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen.-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the feature request here.-->
.github/ISSUE_TEMPLATE/infra.md (vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
---
name: Infra
about: Create a test/dev infra task
title: "[INFRA] "
labels: kind/infra
assignees: ''

---

## What's the test to develop? Please describe

<!--A clear and concise description of what test/dev infra you want to develop.-->

## Describe the items of the test development (DoD, definition of done) you'd like

<!--
Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists

- [ ] `item 1`
-->

## Additional context

<!--Add any other context or screenshots about the test infra request here.-->
.github/ISSUE_TEMPLATE/question.md (vendored, 28 lines deleted)
@@ -1,28 +0,0 @@
---
name: Question
about: Have a question
title: "[QUESTION] "
labels: kind/question
assignees: ''

---
## Question

<!--We suggest using https://github.com/longhorn/longhorn/discussions to ask questions.-->

## Environment

- Longhorn version:
- Kubernetes version:
- Node config
  - OS type and version
  - Kernel version
  - CPU per node:
  - Memory per node:
  - Disk type
  - Network bandwidth and latency between the nodes:
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal):

## Additional context

<!--Add any other context about the problem here.-->
.github/ISSUE_TEMPLATE/refactor.md (vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
---
name: Refactor request
about: Suggest a refactoring request for an existing implementation
title: "[REFACTOR] "
labels: kind/refactoring
assignees: ''

---

## Is your improvement request related to a feature? Please describe

<!--A clear and concise description of what the problem is.-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen.-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the refactoring request here.-->
.github/ISSUE_TEMPLATE/release.md (vendored, 35 lines deleted)
@@ -1,35 +0,0 @@
---
name: Release task
about: Create a release task
title: "[RELEASE]"
labels: release/task
assignees: ''

---

**What's the task? Please describe.**
Action items for releasing v<x.y.z>

**Describe the sub-tasks.**
- Pre-Release
  - [ ] Regression test plan (manual) - @khushboo-rancher
  - [ ] Run e2e regression for pre-GA milestones (`install`, `upgrade`) - @yangchiu
  - [ ] Run security testing of container images for pre-GA milestones - @yangchiu
  - [ ] Verify longhorn chart PR to ensure all artifacts are ready for GA (`install`, `upgrade`) @chriscchien
  - [ ] Run core testing (install, upgrade) for the GA build from the previous patch and the last patch of the previous feature release (1.4.2). - @yangchiu
- Release
  - [ ] Release longhorn/chart from the release branch to publish to ArtifactHub
  - [ ] Release note
    - [ ] Deprecation note
    - [ ] Upgrade notes including highlighted notes, deprecation, compatible changes, and others impacting the current users
- Post-Release
  - [ ] Create a new release branch of manager/ui/tests/engine/longhorn-instance-manager/share-manager/backing-image-manager when creating the RC1
  - [ ] Update https://github.com/longhorn/longhorn/blob/master/deploy/upgrade_responder_server/chart-values.yaml @PhanLe1010
  - [ ] Add another request for the rancher charts for the next patch release (`1.5.1`) @rebeccazzzz
    - Rancher charts: verify the chart is able to install & upgrade - @khushboo-rancher
  - [ ] rancher/image-mirrors update @weizhe0422 (@PhanLe1010 )
    - https://github.com/rancher/image-mirror/pull/412
  - [ ] rancher/charts 2.7 branches for rancher marketplace @weizhe0422 (@PhanLe1010)
    - `dev-2.7`: https://github.com/rancher/charts/pull/2766

cc @longhorn/qa @longhorn/dev
.github/ISSUE_TEMPLATE/task.md (vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
---
name: Task
about: Create a general task
title: "[TASK] "
labels: kind/task
assignees: ''

---

## What's the task? Please describe

<!--A clear and concise description of what the task is.-->

## Describe the sub-tasks

<!--
Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists

- [ ] `item 1`
-->

## Additional context

<!--Add any other context or screenshots about the task request here.-->
.github/ISSUE_TEMPLATE/test.md (vendored, 24 lines deleted)
@@ -1,24 +0,0 @@
---
name: Test
about: Create or update test
title: "[TEST] "
labels: kind/test
assignees: ''

---

## What's the test to develop? Please describe

<!--A clear and concise description of what test you want to develop.-->

## Describe the tasks for the test

<!--
Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists

- [ ] `item 1`
-->

## Additional context

<!--Add any other context or screenshots about the test request here.-->
.github/mergify.yml (vendored, 34 lines deleted)
@@ -1,34 +0,0 @@
pull_request_rules:
  - name: automatic merge after review
    conditions:
      - check-success=continuous-integration/drone/pr
      - check-success=DCO
      - check-success=CodeFactor
      - check-success=codespell
      - "#approved-reviews-by>=1"
      - approved-reviews-by=@longhorn/maintainer
      - label=ready-to-merge
    actions:
      merge:
        method: rebase

  - name: ask to resolve conflict
    conditions:
      - conflict
    actions:
      comment:
        message: This pull request is now in conflicts. Could you fix it @{{author}}? 🙏

  # Comment on the PR to trigger backport. ex: @Mergifyio copy stable/3.1 stable/4.0
  - name: backport patches to stable branch
    conditions:
      - base=master
    actions:
      backport:
        title: "[BACKPORT][{{ destination_branch }}] {{ title }}"
        body: |
          This is an automatic backport of pull request #{{number}}.

          {{cherry_pick_error}}
        assignees:
          - "{{ author }}"
.github/stale.yml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# Configuration for probot-stale - https://github.com/probot/stale

# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 60

# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
daysUntilClose: 7

# Only issues or pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled)
onlyLabels: []

# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
exemptLabels:
  - bug
  - doc
  - enhancement
  - poc
  - refactoring

# Set to true to ignore issues in a project (defaults to false)
exemptProjects: true

# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: true

# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: true

# Label to use when marking as stale
staleLabel: wontfix

# Comment to post when marking as stale. Set to `false` to disable
markComment: >
  This issue has been automatically marked as stale because it has not had
  recent activity. It will be closed if no further activity occurs. Thank you
  for your contributions.

# Comment to post when removing the stale label.
# unmarkComment: >
#   Your comment here.

# Comment to post when closing a stale Issue or Pull Request.
# closeComment: >
#   Your comment here.

# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 30

# Limit to only `issues` or `pulls`
# only: issues

# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':
# pulls:
#   daysUntilStale: 30
#   markComment: >
#     This pull request has been automatically marked as stale because it has not had
#     recent activity. It will be closed if no further activity occurs. Thank you
#     for your contributions.

# issues:
#   exemptLabels:
#     - confirmed
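The settings above chain two windows: 60 days of inactivity before an issue is marked stale (`daysUntilStale`), then 7 more days before it is closed (`daysUntilClose`), unless an exempt label, project, milestone, or assignee applies. A minimal sketch of that timeline using GNU `date` (the activity date is illustrative):

```shell
# Timeline implied by daysUntilStale: 60 and daysUntilClose: 7.
last_activity="2023-01-01"                              # last comment/update on the issue
stale_on=$(date -d "$last_activity +60 days" +%F)       # stale label applied
close_on=$(date -d "$stale_on +7 days" +%F)             # earliest auto-close
echo "$stale_on $close_on"                              # 2023-03-02 2023-03-09
```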
.github/workflows/add-to-projects.yml (vendored, 40 lines deleted)
@@ -1,40 +0,0 @@
name: Add-To-Projects
on:
  issues:
    types: [ opened, labeled ]
jobs:
  community:
    runs-on: ubuntu-latest
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.event.issue.user.login }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Add To Community Project
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] == null
      uses: actions/add-to-project@v0.3.0
      with:
        project-url: https://github.com/orgs/longhorn/projects/5
        github-token: ${{ secrets.CUSTOM_GITHUB_TOKEN }}

  qa:
    runs-on: ubuntu-latest
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.event.issue.user.login }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Add To QA & DevOps Project
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: actions/add-to-project@v0.3.0
      with:
        project-url: https://github.com/orgs/longhorn/projects/4
        github-token: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
        labeled: kind/test, area/infra
        label-operator: OR
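Both jobs run the same membership lookup and branch on whether the returned `teams` JSON array is empty: an empty array (`fromJSON(...)[0] == null`) means the issue author is not in the longhorn org, so the issue lands on the community project; otherwise it goes to the QA & DevOps project. The routing decision sketched as a small shell function (a simplification, assuming the teams output is a JSON array string):

```shell
target_project() {
  # $1 is the teams JSON array from the membership step; an empty
  # array mirrors fromJSON(teams)[0] == null in the workflow above.
  if [ -z "$1" ] || [ "$1" = "[]" ]; then
    echo "https://github.com/orgs/longhorn/projects/5"   # Community
  else
    echo "https://github.com/orgs/longhorn/projects/4"   # QA & DevOps
  fi
}

target_project '[]'            # non-member -> community project
target_project '["qa","dev"]'  # org member -> QA & DevOps project
```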
.github/workflows/close-issue.yml (vendored, 50 lines deleted)
@@ -1,50 +0,0 @@
name: Close-Issue
on:
  issues:
    types: [ unlabeled ]
jobs:
  backport:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'backport/')
    steps:
    - name: Get Backport Version
      uses: xom9ikk/split@v1
      id: split
      with:
        string: ${{ github.event.label.name }}
        separator: /
    - name: Check if Backport Issue Exists
      uses: actions-cool/issues-helper@v3
      id: if-backport-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        title-includes: |
          [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
    - name: Close Backport Issue
      if: fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] != null
      uses: actions-cool/issues-helper@v3
      with:
        actions: 'close-issue'
        token: ${{ github.token }}
        issue-number: ${{ fromJSON(steps.if-backport-issue-exists.outputs.issues)[0].number }}

  automation:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'require/automation-e2e')
    steps:
    - name: Check if Automation Issue Exists
      uses: actions-cool/issues-helper@v3
      id: if-automation-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        title-includes: |
          [TEST]${{ github.event.issue.title }}
    - name: Close Automation Test Issue
      if: fromJSON(steps.if-automation-issue-exists.outputs.issues)[0] != null
      uses: actions-cool/issues-helper@v3
      with:
        actions: 'close-issue'
        token: ${{ github.token }}
        issue-number: ${{ fromJSON(steps.if-automation-issue-exists.outputs.issues)[0].number }}
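The `xom9ikk/split` step splits the label name on `/`, so a label like `backport/1.4.1` yields `1.4.1` in `outputs._1`, which the next step prefixes with `v` to build the `[BACKPORT][v1.4.1]` title it searches for. The same extraction in plain shell (the label value is illustrative):

```shell
label="backport/1.4.1"            # github.event.label.name
version="${label#*/}"             # everything after the first '/', like outputs._1
title="[BACKPORT][v${version}]Example issue title"
echo "$title"                     # [BACKPORT][v1.4.1]Example issue title
```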
.github/workflows/codespell.yml (vendored, 23 lines deleted)
@@ -1,23 +0,0 @@
name: Codespell

on:
  push:
  pull_request:
    branches:
      - master
      - "v*.*.*"

jobs:
  codespell:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 1
      - name: Check code spell
        uses: codespell-project/actions-codespell@v1
        with:
          check_filenames: true
          ignore_words_file: .codespellignore
          skip: "*/**.yaml,*/**.yml,*/**.tpl,./deploy,./dev,./scripts,./uninstall"
.github/workflows/create-issue.yml (vendored, 114 lines deleted)
@@ -1,114 +0,0 @@
name: Create-Issue
on:
  issues:
    types: [ labeled ]
jobs:
  backport:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'backport/')
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.actor }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Get Backport Version
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: xom9ikk/split@v1
      id: split
      with:
        string: ${{ github.event.label.name }}
        separator: /
    - name: Check if Backport Issue Exists
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: actions-cool/issues-helper@v3
      id: if-backport-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        issue-state: 'all'
        title-includes: |
          [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
    - name: Get Milestone Object
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: longhorn/bot/milestone-action@master
      id: milestone
      with:
        token: ${{ github.token }}
        repository: ${{ github.repository }}
        milestone_name: v${{ steps.split.outputs._1 }}
    - name: Get Labels
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      id: labels
      run: |
        RAW_LABELS="${{ join(github.event.issue.labels.*.name, ' ') }}"
        RAW_LABELS="${RAW_LABELS} kind/backport"
        echo "RAW LABELS: $RAW_LABELS"
        LABELS=$(echo "$RAW_LABELS" | sed -r 's/\s*backport\S+//g' | sed -r 's/\s*require\/auto-e2e-test//g' | xargs | sed 's/ /, /g')
        echo "LABELS: $LABELS"
        echo "labels=$LABELS" >> $GITHUB_OUTPUT
    - name: Create Backport Issue
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: dacbd/create-issue-action@v1
      id: new-issue
      with:
        token: ${{ github.token }}
        title: |
          [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
        body: |
          backport ${{ github.event.issue.html_url }}
        labels: ${{ steps.labels.outputs.labels }}
        milestone: ${{ fromJSON(steps.milestone.outputs.data).number }}
        assignees: ${{ join(github.event.issue.assignees.*.login, ', ') }}
    - name: Get Repo Id
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: octokit/request-action@v2.x
      id: repo
      with:
        route: GET /repos/${{ github.repository }}
      env:
        GITHUB_TOKEN: ${{ github.token }}
    - name: Add Backport Issue To Release
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: longhorn/bot/add-zenhub-release-action@master
      with:
        zenhub_token: ${{ secrets.ZENHUB_TOKEN }}
        repo_id: ${{ fromJSON(steps.repo.outputs.data).id }}
        issue_number: ${{ steps.new-issue.outputs.number }}
        release_name: ${{ steps.split.outputs._1 }}

  automation:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'require/auto-e2e-test')
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.actor }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Check if Automation Issue Exists
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: actions-cool/issues-helper@v3
      id: if-automation-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        issue-state: 'all'
        title-includes: |
          [TEST]${{ github.event.issue.title }}
    - name: Create Automation Test Issue
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-automation-issue-exists.outputs.issues)[0] == null
      uses: dacbd/create-issue-action@v1
      with:
        token: ${{ github.token }}
        title: |
          [TEST]${{ github.event.issue.title }}
        body: |
          adding/updating auto e2e test cases for ${{ github.event.issue.html_url }} if they can be automated

          cc @longhorn/qa
        labels: kind/test
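The `Get Labels` step rewrites the original issue's label list for the backport issue: it appends `kind/backport`, strips any `backport/*` label and `require/auto-e2e-test`, then joins the remainder with commas. The same pipeline run standalone (GNU sed; the sample labels are illustrative):

```shell
# Mirror the label transformation from the workflow's run: script.
RAW_LABELS="kind/bug require/qa-review-coverage backport/1.4.1"
RAW_LABELS="${RAW_LABELS} kind/backport"
LABELS=$(echo "$RAW_LABELS" \
  | sed -r 's/\s*backport\S+//g' \
  | sed -r 's/\s*require\/auto-e2e-test//g' \
  | xargs | sed 's/ /, /g')
echo "$LABELS"   # kind/bug, require/qa-review-coverage, kind/backport
```

Note that `backport\S+` requires at least one character after `backport`, so `kind/backport` survives while `backport/1.4.1` is dropped.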
.github/workflows/stale.yaml (vendored, 28 lines deleted)
@@ -1,28 +0,0 @@
name: 'Close stale issues and PRs'

on:
  workflow_call:
  workflow_dispatch:
  schedule:
    - cron: '30 1 * * *'

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v4
        with:
          stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
          stale-pr-message: 'This PR is stale because it has been open 45 days with no activity. Remove stale label or comment or this will be closed in 10 days.'
          close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.'
          close-pr-message: 'This PR was closed because it has been stalled for 10 days with no activity.'
          days-before-stale: 30
          days-before-pr-stale: 45
          days-before-close: 5
          days-before-pr-close: 10
          stale-issue-label: 'stale'
          stale-pr-label: 'stale'
          exempt-all-assignees: true
          exempt-issue-labels: 'kind/bug,kind/doc,kind/enhancement,kind/poc,kind/refactoring,kind/test,kind/task,kind/backport,kind/regression,kind/evaluation'
          exempt-draft-pr: true
          exempt-all-milestones: true
.gitignore (vendored, 7 lines deleted)
@@ -1,7 +0,0 @@
# ignores all GoLand project folders and files
.idea
*.iml
*.ipr

# python venv for dev scripts
.venv
@@ -1,283 +0,0 @@
## Release Note
**v1.4.0 released!** 🎆

This release introduces many enhancements, improvements, and bug fixes as described below, covering stability, performance, data integrity, troubleshooting, and more. Please try it out and give feedback. Thanks for all the contributions!

- [Kubernetes 1.25 Support](https://github.com/longhorn/longhorn/issues/4003) [[doc]](https://longhorn.io/docs/1.4.0/deploy/important-notes/#pod-security-policies-disabled--pod-security-admission-introduction)
  In previous versions, Longhorn relied on Pod Security Policy (PSP) to authorize Longhorn components for privileged operations. From Kubernetes 1.25, PSP has been removed and replaced with Pod Security Admission (PSA). Longhorn v1.4.0 supports opt-in PSP enablement, so it can support Kubernetes versions with or without PSP.

- [ARM64 GA](https://github.com/longhorn/longhorn/issues/4206)
  ARM64 has been experimental since Longhorn v1.1.0. After receiving more user feedback and increasing testing coverage, the ARM64 distribution has been stabilized with quality as per our regular regression testing, so it is qualified for general availability.

- [RWX GA](https://github.com/longhorn/longhorn/issues/2293) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220727-dedicated-recovery-backend-for-rwx-volume-nfs-server.md)[[doc]](https://longhorn.io/docs/1.4.0/advanced-resources/rwx-workloads/)
  RWX has been experimental since Longhorn v1.1.0, but it lacked availability support when the backing Longhorn Share Manager component became unavailable. Longhorn v1.4.0 supports an NFS recovery backend based on a Kubernetes built-in resource, ConfigMap, for recovering the NFS client connection during the fail-over period. Also, the introduction of NFS client hard mode further avoids the previous potential data loss. For details, please check the issue and enhancement proposal.

- [Volume Snapshot Checksum](https://github.com/longhorn/longhorn/issues/4210) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220922-snapshot-checksum-and-bit-rot-detection.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#snapshot-data-integrity)
  Data integrity is a continuous effort for Longhorn. In this version, Snapshot Checksum has been introduced with settings that allow users to enable or disable checksum calculation with different modes.

- [Volume Bit-rot Protection](https://github.com/longhorn/longhorn/issues/3198) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220922-snapshot-checksum-and-bit-rot-detection.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#snapshot-data-integrity)
  When the Volume Snapshot Checksum feature is enabled, Longhorn will periodically calculate and check the checksums of volume snapshots, find corrupted snapshots, then fix them.

- [Volume Replica Rebuilding Speedup](https://github.com/longhorn/longhorn/issues/4783)
  When the Volume Snapshot Checksum feature is enabled, Longhorn will use the calculated snapshot checksums to avoid needless snapshot replication between nodes, improving replica rebuilding speed and reducing resource consumption.

- [Volume Trim](https://github.com/longhorn/longhorn/issues/836) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20221103-filesystem-trim.md)[[doc]](https://longhorn.io/docs/1.4.0/volumes-and-nodes/trim-filesystem/#trim-the-filesystem-in-a-longhorn-volume)
  The Longhorn engine supports the UNMAP SCSI command to reclaim space from the block volume.

- [Online Volume Expansion](https://github.com/longhorn/longhorn/issues/1674) [[doc]](https://longhorn.io/docs/1.4.0/volumes-and-nodes/expansion)
  The Longhorn engine supports optional parameters to pass size expansion requests when updating the volume frontend, supporting online volume expansion and resizing the filesystem via the CSI node driver.

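From the user's side, online expansion rides on the ordinary Kubernetes CSI resize flow: raising `spec.resources.requests.storage` on a bound PVC while the volume stays attached. A minimal sketch, with hypothetical claim name and sizes (not from this release note):

```yaml
# Bump the requested size on a live PVC; the CSI driver expands the
# Longhorn volume and the node plugin resizes the filesystem online.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # hypothetical claim name
spec:
  storageClassName: longhorn
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi          # was 10Gi; raising it triggers expansion
```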
- [Local Volume via Data Locality Strict Mode](https://github.com/longhorn/longhorn/issues/3957) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20200819-keep-a-local-replica-to-engine.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#default-data-locality)
Local volumes are based on a new Data Locality setting, Strict Local. It lets users create a single-replica volume that stays in a fixed location, and data transfer between the volume frontend and engine goes through a local socket instead of the TCP stack, improving performance and reducing resource consumption.
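A minimal StorageClass sketch for such a volume (the class name is illustrative; parameter spelling follows the linked data-locality doc):

```yaml
# Sketch: a single-replica, strict-local Longhorn volume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-strict-local   # hypothetical name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"         # strict-local requires exactly one replica
  dataLocality: "strict-local"
```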
- [Volume Recurring Job Backup Restore](https://github.com/longhorn/longhorn/issues/2227) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20201002-allow-recurring-backup-detached-volumes.md)[[doc]](https://longhorn.io/docs/1.4.0/snapshots-and-backups/backup-and-restore/restore-recurring-jobs-from-a-backup/)

Recurring jobs bound to a volume can be backed up to the remote backup target together with the volume backup metadata, and restored along with the volume for a better operational experience.

- [Volume IO Metrics](https://github.com/longhorn/longhorn/issues/2406) [[doc]](https://longhorn.io/docs/1.4.0/monitoring/metrics/#volume)
Longhorn enriches volume metrics by providing real-time IO stats, including IOPS, latency, and throughput of read/write IO. Users can set up a monitoring solution like Prometheus to track volume performance.
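For example, a minimal Prometheus scrape sketch against the manager's metrics endpoint (the service address and port are assumptions based on the default `longhorn-backend` service; see the linked metrics doc for the exact metric names and endpoint):

```yaml
# Hedged sketch: scrape Longhorn volume metrics such as the per-volume
# read/write IOPS, latency, and throughput series listed in the metrics doc.
scrape_configs:
  - job_name: longhorn
    static_configs:
      - targets: ["longhorn-backend.longhorn-system:9500"]
```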
- [Longhorn System Backup & Restore](https://github.com/longhorn/longhorn/issues/1455) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220913-longhorn-system-backup-restore.md)[[doc]](https://longhorn.io/docs/1.4.0/advanced-resources/system-backup-restore/)

Users can back up the Longhorn system to the remote backup target. It can then be restored in place to an existing cluster, or to a new cluster, for specific operational purposes.

- [Support Bundle Enhancement](https://github.com/longhorn/longhorn/issues/2759) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20221109-support-bundle-enhancement.md)

Longhorn introduces a new support bundle integration based on a general [support bundle kit](https://github.com/rancher/support-bundle-kit) solution. This can help us collect more complete troubleshooting info and simulate the cluster environment.

- [Tunable Timeout between Engine and Replica](https://github.com/longhorn/longhorn/issues/4491) [[doc]](https://longhorn.io/docs/1.4.0/references/settings/#engine-to-replica-timeout)
In previous Longhorn versions, the timeout between the Longhorn engine and replica was fixed, with no user-facing setting. This could be challenging for users running on low-spec infrastructure. Making the timeout configurable allows users to adaptively tune the stability of volume operations.
## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.0.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.0/deploy/install/).
## Upgrade
> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.0 from v1.3.x. Upgrading is only supported from v1.3.x.**
Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.0/deploy/upgrade/).
## Deprecation & Incompatibilities
- Pod Security Policy is now an opt-in setting. To install Longhorn with PSP support, enable it first.
- The built-in CSI Snapshotter sidecar is upgraded to v5.0.1. The v1beta1 version of the Volume Snapshot custom resource is deprecated but still supported. However, it will be removed after the CSI Snapshotter is upgraded to v6.1 or a later version, so please switch to the v1 version before the deprecated one is removed.
## Known Issues after Release
Please check [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) for any outstanding issues found after this release.
## Highlights

- [FEATURE] Reclaim/Shrink space of volume ([836](https://github.com/longhorn/longhorn/issues/836)) - @yangchiu @derekbit @smallteeths @shuo-wu
- [FEATURE] Backup/Restore Longhorn System ([1455](https://github.com/longhorn/longhorn/issues/1455)) - @c3y1huang @khushboo-rancher
- [FEATURE] Online volume expansion ([1674](https://github.com/longhorn/longhorn/issues/1674)) - @shuo-wu @chriscchien
- [FEATURE] Record recurring schedule in the backups and allow user choose to use it for the restored volume ([2227](https://github.com/longhorn/longhorn/issues/2227)) - @yangchiu @mantissahz
- [FEATURE] NFS support (RWX) GA ([2293](https://github.com/longhorn/longhorn/issues/2293)) - @derekbit @chriscchien
- [FEATURE] Support metrics for Volume IOPS, throughput and latency real time ([2406](https://github.com/longhorn/longhorn/issues/2406)) - @derekbit @roger-ryao
- [FEATURE] Support bundle enhancement ([2759](https://github.com/longhorn/longhorn/issues/2759)) - @c3y1huang @chriscchien
- [FEATURE] Automatic identifying of corrupted replica (bit rot detection) ([3198](https://github.com/longhorn/longhorn/issues/3198)) - @yangchiu @derekbit
- [FEATURE] Local volume for distributed data workloads ([3957](https://github.com/longhorn/longhorn/issues/3957)) - @derekbit @chriscchien
- [IMPROVEMENT] Support K8s 1.25 by updating removed deprecated resource versions like PodSecurityPolicy ([4003](https://github.com/longhorn/longhorn/issues/4003)) - @PhanLe1010 @chriscchien
- [IMPROVEMENT] Faster resync time for fresh replica rebuilding ([4092](https://github.com/longhorn/longhorn/issues/4092)) - @yangchiu @derekbit
- [FEATURE] Introduce checksum for snapshots ([4210](https://github.com/longhorn/longhorn/issues/4210)) - @derekbit @roger-ryao
- [FEATURE] Update K8s version support and component/pkg/build dependencies ([4239](https://github.com/longhorn/longhorn/issues/4239)) - @yangchiu @PhanLe1010
- [BUG] data corruption due to COW and block size not being aligned during rebuilding replicas ([4354](https://github.com/longhorn/longhorn/issues/4354)) - @PhanLe1010 @chriscchien
- [IMPROVEMENT] Adjust the iSCSI timeout and the engine-to-replica timeout settings ([4491](https://github.com/longhorn/longhorn/issues/4491)) - @yangchiu @derekbit
- [IMPROVEMENT] Using specific block size in Longhorn volume's filesystem ([4594](https://github.com/longhorn/longhorn/issues/4594)) - @derekbit @roger-ryao
- [IMPROVEMENT] Speed up replica rebuilding by the metadata such as ctime of snapshot disk files ([4783](https://github.com/longhorn/longhorn/issues/4783)) - @yangchiu @derekbit

## Enhancements

- [FEATURE] Configure successfulJobsHistoryLimit of CronJobs ([1711](https://github.com/longhorn/longhorn/issues/1711)) - @weizhe0422 @chriscchien
- [FEATURE] Allow customization of the cipher used by cryptsetup in volume encryption ([3353](https://github.com/longhorn/longhorn/issues/3353)) - @mantissahz @chriscchien
- [FEATURE] New setting to limit the concurrent volume restoring from backup ([4558](https://github.com/longhorn/longhorn/issues/4558)) - @c3y1huang @chriscchien
- [FEATURE] Make FS format options configurable in storage class ([4642](https://github.com/longhorn/longhorn/issues/4642)) - @weizhe0422 @chriscchien

## Improvement

- [IMPROVEMENT] Change the script into a docker run command mentioned in 'recovery from longhorn backup without system installed' doc ([1521](https://github.com/longhorn/longhorn/issues/1521)) - @weizhe0422 @chriscchien
- [IMPROVEMENT] Improve 'recovery from longhorn backup without system installed' doc. ([1522](https://github.com/longhorn/longhorn/issues/1522)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Dump NFS ganesha logs to pod stdout ([2380](https://github.com/longhorn/longhorn/issues/2380)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Support failed/obsolete orphaned backup cleanup ([3898](https://github.com/longhorn/longhorn/issues/3898)) - @mantissahz @chriscchien
- [IMPROVEMENT] liveness and readiness probes with longhorn csi plugin daemonset ([3907](https://github.com/longhorn/longhorn/issues/3907)) - @c3y1huang @roger-ryao
- [IMPROVEMENT] Longhorn doesn't reuse failed replica on a disk with full allocated space ([3921](https://github.com/longhorn/longhorn/issues/3921)) - @PhanLe1010 @chriscchien
- [IMPROVEMENT] Reduce syscalls while reading and writing requests in longhorn-engine (engine <-> replica) ([4122](https://github.com/longhorn/longhorn/issues/4122)) - @yangchiu @derekbit
- [IMPROVEMENT] Reduce read and write calls in liblonghorn (tgt <-> engine) ([4133](https://github.com/longhorn/longhorn/issues/4133)) - @derekbit
- [IMPROVEMENT] Replace the GCC allocator in liblonghorn with a more efficient memory allocator ([4136](https://github.com/longhorn/longhorn/issues/4136)) - @yangchiu @derekbit
- [DOC] Update Helm readme and document ([4175](https://github.com/longhorn/longhorn/issues/4175)) - @derekbit
- [IMPROVEMENT] Purging a volume before rebuilding starts ([4183](https://github.com/longhorn/longhorn/issues/4183)) - @yangchiu @shuo-wu
- [IMPROVEMENT] Schedule volumes based on available disk space ([4185](https://github.com/longhorn/longhorn/issues/4185)) - @yangchiu @c3y1huang
- [IMPROVEMENT] Recognize default toleration and node selector to allow Longhorn run on the RKE mixed cluster ([4246](https://github.com/longhorn/longhorn/issues/4246)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Support bundle doesn't collect the snapshot yamls ([4285](https://github.com/longhorn/longhorn/issues/4285)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Avoid accidentally deleting engine images that are still in use ([4332](https://github.com/longhorn/longhorn/issues/4332)) - @derekbit @chriscchien
- [IMPROVEMENT] Show non-JSON error from backup store ([4336](https://github.com/longhorn/longhorn/issues/4336)) - @c3y1huang
- [IMPROVEMENT] Update nfs-ganesha to v4.0 ([4351](https://github.com/longhorn/longhorn/issues/4351)) - @derekbit
- [IMPROVEMENT] show error when failed to init frontend ([4362](https://github.com/longhorn/longhorn/issues/4362)) - @c3y1huang
- [IMPROVEMENT] Too many debug-level log messages in engine instance-manager ([4427](https://github.com/longhorn/longhorn/issues/4427)) - @derekbit @chriscchien
- [IMPROVEMENT] Add prep work for fixing the corrupted filesystem using fsck in KB ([4440](https://github.com/longhorn/longhorn/issues/4440)) - @derekbit
- [IMPROVEMENT] Prevent users from accidentally uninstalling Longhorn ([4509](https://github.com/longhorn/longhorn/issues/4509)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] add possibility to use nodeSelector on the storageClass ([4574](https://github.com/longhorn/longhorn/issues/4574)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Check if node schedulable condition is set before trying to read it ([4581](https://github.com/longhorn/longhorn/issues/4581)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Review/consolidate the sectorSize in replica server, replica volume, and engine ([4599](https://github.com/longhorn/longhorn/issues/4599)) - @yangchiu @derekbit
- [IMPROVEMENT] Reorganize longhorn-manager/k8s/patches and auto-generate preserveUnknownFields field ([4600](https://github.com/longhorn/longhorn/issues/4600)) - @yangchiu @derekbit
- [IMPROVEMENT] share-manager pod bypasses the kubernetes scheduler ([4789](https://github.com/longhorn/longhorn/issues/4789)) - @joshimoo @chriscchien
- [IMPROVEMENT] Unify the format of returned error messages in longhorn-engine ([4828](https://github.com/longhorn/longhorn/issues/4828)) - @derekbit
- [IMPROVEMENT] Longhorn system backup/restore UI ([4855](https://github.com/longhorn/longhorn/issues/4855)) - @smallteeths
- [IMPROVEMENT] Replace the modTime (mtime) with ctime in snapshot hash ([4934](https://github.com/longhorn/longhorn/issues/4934)) - @derekbit @chriscchien
- [BUG] volume is stuck in attaching/detaching loop with error `Failed to init frontend: device...` ([4959](https://github.com/longhorn/longhorn/issues/4959)) - @derekbit @PhanLe1010 @chriscchien
- [IMPROVEMENT] Affinity in the longhorn-ui deployment within the helm chart ([4987](https://github.com/longhorn/longhorn/issues/4987)) - @mantissahz @chriscchien
- [IMPROVEMENT] Allow users to change volume.spec.snapshotDataIntegrity on UI ([4994](https://github.com/longhorn/longhorn/issues/4994)) - @yangchiu @smallteeths
- [IMPROVEMENT] Backup and restore recurring jobs on UI ([5009](https://github.com/longhorn/longhorn/issues/5009)) - @smallteeths @chriscchien
- [IMPROVEMENT] Disable `Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly` for RWX volumes ([5017](https://github.com/longhorn/longhorn/issues/5017)) - @derekbit @chriscchien
- [IMPROVEMENT] Enable fast replica rebuilding by default ([5023](https://github.com/longhorn/longhorn/issues/5023)) - @derekbit @roger-ryao
- [IMPROVEMENT] Upgrade tcmalloc in longhorn-engine ([5050](https://github.com/longhorn/longhorn/issues/5050)) - @derekbit
- [IMPROVEMENT] UI show error when backup target is empty for system backup ([5056](https://github.com/longhorn/longhorn/issues/5056)) - @smallteeths @khushboo-rancher
- [IMPROVEMENT] System restore job name should be Longhorn prefixed ([5057](https://github.com/longhorn/longhorn/issues/5057)) - @c3y1huang @khushboo-rancher
- [BUG] Error in logs while restoring the system backup ([5061](https://github.com/longhorn/longhorn/issues/5061)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Add warning message to when deleting the restoring backups ([5065](https://github.com/longhorn/longhorn/issues/5065)) - @smallteeths @khushboo-rancher @roger-ryao
- [IMPROVEMENT] Inconsistent name convention across volume backup restore and system backup restore ([5066](https://github.com/longhorn/longhorn/issues/5066)) - @smallteeths @roger-ryao
- [IMPROVEMENT] System restore should proceed to restore other volumes if restoring one volume keeps failing for a certain time. ([5086](https://github.com/longhorn/longhorn/issues/5086)) - @c3y1huang @khushboo-rancher @roger-ryao
- [IMPROVEMENT] Support customized number of replicas of webhook and recovery-backend ([5087](https://github.com/longhorn/longhorn/issues/5087)) - @derekbit @chriscchien
- [IMPROVEMENT] Simplify the page by placing some configuration items in the advanced configuration when creating the volume ([5090](https://github.com/longhorn/longhorn/issues/5090)) - @yangchiu @smallteeths
- [IMPROVEMENT] Support replica sync client timeout setting to stabilize replica rebuilding ([5110](https://github.com/longhorn/longhorn/issues/5110)) - @derekbit @chriscchien
- [IMPROVEMENT] Set a newly created volume's data integrity from UI to `ignored` rather than `Fast-Check`. ([5126](https://github.com/longhorn/longhorn/issues/5126)) - @yangchiu @smallteeths

## Performance

- [BUG] Turn a node down and up, workload takes longer time to come back online in Longhorn v1.2.0 ([2947](https://github.com/longhorn/longhorn/issues/2947)) - @yangchiu @PhanLe1010
- [TASK] RWX volume performance measurement and investigation ([3665](https://github.com/longhorn/longhorn/issues/3665)) - @derekbit
- [TASK] Verify spinning disk/HDD via the current e2e regression ([4182](https://github.com/longhorn/longhorn/issues/4182)) - @yangchiu
- [BUG] test_csi_snapshot_snap_create_volume_from_snapshot failed when using HDD as Longhorn disks ([4227](https://github.com/longhorn/longhorn/issues/4227)) - @yangchiu @PhanLe1010
- [TASK] Disable tcmalloc in data path because newer tcmalloc version leads to performance drop ([5096](https://github.com/longhorn/longhorn/issues/5096)) - @derekbit @chriscchien

## Stability

- [BUG] Longhorn won't fail all replicas if there is no valid backend during the engine starting stage ([1330](https://github.com/longhorn/longhorn/issues/1330)) - @derekbit @roger-ryao
- [BUG] Every other backup fails and crashes the volume (Segmentation Fault) ([1768](https://github.com/longhorn/longhorn/issues/1768)) - @olljanat @mantissahz
- [BUG] Backend sizes do not match 5368709120 != 10737418240 in the engine initiation phase ([3601](https://github.com/longhorn/longhorn/issues/3601)) - @derekbit @chriscchien
- [BUG] Somehow the Rebuilding field inside volume.meta is set to true causing the volume to stuck in attaching/detaching loop ([4212](https://github.com/longhorn/longhorn/issues/4212)) - @yangchiu @derekbit
- [BUG] Engine binary cannot be recovered after being removed accidentally ([4380](https://github.com/longhorn/longhorn/issues/4380)) - @yangchiu @c3y1huang
- [TASK] Disable tcmalloc in longhorn-engine and longhorn-instance-manager ([5068](https://github.com/longhorn/longhorn/issues/5068)) - @derekbit

## Bugs

- [BUG] Removing old instance records after the new IM pod is launched will take 1 minute ([1363](https://github.com/longhorn/longhorn/issues/1363)) - @mantissahz
- [BUG] Restoring volume stuck forever if the backup is already deleted. ([1867](https://github.com/longhorn/longhorn/issues/1867)) - @mantissahz @chriscchien
- [BUG] Duplicated default instance manager leads to engine/replica cannot be started ([3000](https://github.com/longhorn/longhorn/issues/3000)) - @PhanLe1010 @roger-ryao
- [BUG] Restore from backup sometimes failed if having high frequent recurring backup job w/ retention ([3055](https://github.com/longhorn/longhorn/issues/3055)) - @mantissahz @roger-ryao
- [BUG] Newly created backup stays in `InProgress` when the volume deleted before backup finished ([3122](https://github.com/longhorn/longhorn/issues/3122)) - @mantissahz @chriscchien
- [Bug] Degraded volume generate failed replica make volume unschedulable ([3220](https://github.com/longhorn/longhorn/issues/3220)) - @derekbit @chriscchien
- [BUG] The default access mode of a restored RWX volume is RWO ([3444](https://github.com/longhorn/longhorn/issues/3444)) - @weizhe0422 @roger-ryao
- [BUG] Replica rebuilding failure with error "Replica must be closed, Can not add in state: open" ([3828](https://github.com/longhorn/longhorn/issues/3828)) - @mantissahz @roger-ryao
- [BUG] Max length of volume name not consist between frontend and backend ([3917](https://github.com/longhorn/longhorn/issues/3917)) - @weizhe0422 @roger-ryao
- [BUG] Can't delete volumesnapshot if backup removed first ([4107](https://github.com/longhorn/longhorn/issues/4107)) - @weizhe0422 @chriscchien
- [BUG] A IM-proxy connection not closed in full regression 1.3 ([4113](https://github.com/longhorn/longhorn/issues/4113)) - @c3y1huang @chriscchien
- [BUG] Scale replica warning ([4120](https://github.com/longhorn/longhorn/issues/4120)) - @c3y1huang @chriscchien
- [BUG] Wrong nodeOrDiskEvicted collected in node monitor ([4143](https://github.com/longhorn/longhorn/issues/4143)) - @yangchiu @derekbit
- [BUG] Misleading log "BUG: replica is running but storage IP is empty" ([4153](https://github.com/longhorn/longhorn/issues/4153)) - @shuo-wu @chriscchien
- [BUG] longhorn-manager cannot start while upgrading if the configmap contains volume sensitive settings ([4160](https://github.com/longhorn/longhorn/issues/4160)) - @derekbit @chriscchien
- [BUG] Replica stuck in buggy state with status.currentState is error and the spec.desireState is running ([4197](https://github.com/longhorn/longhorn/issues/4197)) - @yangchiu @PhanLe1010
- [BUG] After updating longhorn to version 1.3.0, only 1 node had problems and I can't even delete it ([4213](https://github.com/longhorn/longhorn/issues/4213)) - @derekbit @c3y1huang @chriscchien
- [BUG] Unable to use a TTY error when running environment_check.sh ([4216](https://github.com/longhorn/longhorn/issues/4216)) - @flkdnt @chriscchien
- [BUG] The last healthy replica may be evicted or removed ([4238](https://github.com/longhorn/longhorn/issues/4238)) - @yangchiu @shuo-wu
- [BUG] Volume detaching and attaching repeatedly while creating multiple snapshots with a same id ([4250](https://github.com/longhorn/longhorn/issues/4250)) - @yangchiu @derekbit
- [BUG] Backing image is not deleted and recreated correctly ([4256](https://github.com/longhorn/longhorn/issues/4256)) - @shuo-wu @chriscchien
- [BUG] longhorn-ui fails to start on RKE2 with cis-1.6 profile for Longhorn v1.3.0 with helm install ([4266](https://github.com/longhorn/longhorn/issues/4266)) - @yangchiu @mantissahz
- [BUG] Longhorn volume stuck in deleting state ([4278](https://github.com/longhorn/longhorn/issues/4278)) - @yangchiu @PhanLe1010
- [BUG] the IP address is duplicate when using storage network and the second network is contronllerd by ovs-cni. ([4281](https://github.com/longhorn/longhorn/issues/4281)) - @mantissahz
- [BUG] build longhorn-ui image error ([4283](https://github.com/longhorn/longhorn/issues/4283)) - @smallteeths
- [BUG] Wrong conditions in the Chart default-setting manifest for Rancher deployed Windows Cluster feature ([4289](https://github.com/longhorn/longhorn/issues/4289)) - @derekbit @chriscchien
- [BUG] Volume operations/rebuilding error during eviction ([4294](https://github.com/longhorn/longhorn/issues/4294)) - @yangchiu @shuo-wu
- [BUG] longhorn-manager deletes same pod multi times when rebooting ([4302](https://github.com/longhorn/longhorn/issues/4302)) - @mantissahz @w13915984028
- [BUG] test_setting_backing_image_auto_cleanup failed because the backing image file isn't deleted on the corresponding node as expected ([4308](https://github.com/longhorn/longhorn/issues/4308)) - @shuo-wu @chriscchien
- [BUG] After automatically force delete terminating pods of deployment on down node, data lost and I/O error ([4384](https://github.com/longhorn/longhorn/issues/4384)) - @yangchiu @derekbit @PhanLe1010
- [BUG] Volume can not attach to node when engine image DaemonSet pods are not fully deployed ([4386](https://github.com/longhorn/longhorn/issues/4386)) - @PhanLe1010 @chriscchien
- [BUG] Error/warning during uninstallation of Longhorn v1.3.1 via manifest ([4405](https://github.com/longhorn/longhorn/issues/4405)) - @PhanLe1010 @roger-ryao
- [BUG] can't upgrade engine if a volume was created in Longhorn v1.0 and the volume.spec.dataLocality is `""` ([4412](https://github.com/longhorn/longhorn/issues/4412)) - @derekbit @chriscchien
- [BUG] Confusing description the label for replica delition ([4430](https://github.com/longhorn/longhorn/issues/4430)) - @yangchiu @smallteeths
- [BUG] Update the Longhorn document in Using the Environment Check Script ([4450](https://github.com/longhorn/longhorn/issues/4450)) - @weizhe0422 @roger-ryao
- [BUG] Unable to search 1.3.1 doc by algolia ([4457](https://github.com/longhorn/longhorn/issues/4457)) - @mantissahz @roger-ryao
- [BUG] Misleading message "The volume is in expansion progress from size 20Gi to 10Gi" if the expansion is invalid ([4475](https://github.com/longhorn/longhorn/issues/4475)) - @yangchiu @smallteeths
- [BUG] Flaky case test_autosalvage_with_data_locality_enabled ([4489](https://github.com/longhorn/longhorn/issues/4489)) - @weizhe0422
- [BUG] Continuously rebuild when auto-balance==least-effort and existing node becomes unschedulable ([4502](https://github.com/longhorn/longhorn/issues/4502)) - @yangchiu @c3y1huang
- [BUG] Inconsistent system snapshots between replicas after rebuilding ([4513](https://github.com/longhorn/longhorn/issues/4513)) - @derekbit
- [BUG] Prometheus metric for backup state (longhorn_backup_state) returns wrong values ([4521](https://github.com/longhorn/longhorn/issues/4521)) - @mantissahz @roger-ryao
- [BUG] Longhorn accidentally schedule all replicas onto a worker node even though the setting Replica Node Level Soft Anti-Affinity is currently disabled ([4546](https://github.com/longhorn/longhorn/issues/4546)) - @yangchiu @mantissahz
- [BUG] LH continuously reports `invalid customized default setting taint-toleration` ([4554](https://github.com/longhorn/longhorn/issues/4554)) - @weizhe0422 @roger-ryao
- [BUG] the values.yaml in the longhorn helm chart contains values not used. ([4601](https://github.com/longhorn/longhorn/issues/4601)) - @weizhe0422 @roger-ryao
- [BUG] longhorn-engine integration test test_restore_to_file_with_backing_file failed after upgrade to sles 15.4 ([4632](https://github.com/longhorn/longhorn/issues/4632)) - @mantissahz
- [BUG] Can not pull a backup created by another Longhorn system from the remote backup target ([4637](https://github.com/longhorn/longhorn/issues/4637)) - @yangchiu @mantissahz @roger-ryao
- [BUG] Fix the share-manager deletion failure if the confimap is not existing ([4648](https://github.com/longhorn/longhorn/issues/4648)) - @derekbit @roger-ryao
- [BUG] Updating volume-scheduling-error failure for RWX volumes and expanding volumes ([4654](https://github.com/longhorn/longhorn/issues/4654)) - @derekbit @chriscchien
- [BUG] charts/longhorn/questions.yaml include oudated csi-image tags ([4669](https://github.com/longhorn/longhorn/issues/4669)) - @PhanLe1010 @roger-ryao
- [BUG] rebuilding the replica failed after upgrading from 1.2.4 to 1.3.2-rc2 ([4705](https://github.com/longhorn/longhorn/issues/4705)) - @derekbit @chriscchien
- [BUG] Cannot re-run helm uninstallation if the first one failed and cannot fetch logs of failed uninstallation pod ([4711](https://github.com/longhorn/longhorn/issues/4711)) - @yangchiu @PhanLe1010 @roger-ryao
- [BUG] The old instance-manager-r Pods are not deleted after upgrade ([4726](https://github.com/longhorn/longhorn/issues/4726)) - @mantissahz @chriscchien
- [BUG] Replica Auto Balance repeatedly delete the local replica and trigger rebuilding ([4761](https://github.com/longhorn/longhorn/issues/4761)) - @c3y1huang @roger-ryao
- [BUG] Volume metafile getting deleted or empty results in a detach-attach loop ([4846](https://github.com/longhorn/longhorn/issues/4846)) - @mantissahz @chriscchien
- [BUG] Backing image is stuck at `in-progress` status if the provided checksum is incorrect ([4852](https://github.com/longhorn/longhorn/issues/4852)) - @FrankYang0529 @chriscchien
- [BUG] Duplicate channel close error in the backing image manage related components ([4865](https://github.com/longhorn/longhorn/issues/4865)) - @weizhe0422 @roger-ryao
- [BUG] The node ID of backing image data source somehow get changed then lead to file handling failed ([4887](https://github.com/longhorn/longhorn/issues/4887)) - @shuo-wu @chriscchien
- [BUG] Cannot upload a backing image larger than 10G ([4902](https://github.com/longhorn/longhorn/issues/4902)) - @smallteeths @shuo-wu @chriscchien
- [BUG] Failed to build longhorn-instance-manager master branch ([4946](https://github.com/longhorn/longhorn/issues/4946)) - @derekbit
- [BUG] PVC only works with plural annotation `volumes.kubernetes.io/storage-provisioner: driver.longhorn.io` ([4951](https://github.com/longhorn/longhorn/issues/4951)) - @weizhe0422
- [BUG] Failed to create a replenished replica process because of the newly adding option ([4962](https://github.com/longhorn/longhorn/issues/4962)) - @yangchiu @derekbit
- [BUG] Incorrect log messages in longhorn-engine processRemoveSnapshot() ([4980](https://github.com/longhorn/longhorn/issues/4980)) - @derekbit
- [BUG] System backup showing wrong age ([5047](https://github.com/longhorn/longhorn/issues/5047)) - @smallteeths @khushboo-rancher
- [BUG] System backup should validate empty backup target ([5055](https://github.com/longhorn/longhorn/issues/5055)) - @c3y1huang @khushboo-rancher
- [BUG] missing the `restoreVolumeRecurringJob` parameter in the VolumeGet API ([5062](https://github.com/longhorn/longhorn/issues/5062)) - @mantissahz @roger-ryao
- [BUG] System restore stuck in restoring if pvc exists with identical name ([5064](https://github.com/longhorn/longhorn/issues/5064)) - @c3y1huang @roger-ryao
- [BUG] No error shown on UI if system backup conf not available ([5072](https://github.com/longhorn/longhorn/issues/5072)) - @c3y1huang @khushboo-rancher
- [BUG] System restore missing services ([5074](https://github.com/longhorn/longhorn/issues/5074)) - @yangchiu @c3y1huang
- [BUG] In a system restore, PV & PVC are not restored if PVC was created with 'longhorn-static' (created via Longhorn GUI) ([5091](https://github.com/longhorn/longhorn/issues/5091)) - @c3y1huang @khushboo-rancher
- [BUG][v1.4.0-rc1] image security scan CRITICAL issues ([5107](https://github.com/longhorn/longhorn/issues/5107)) - @yangchiu @mantissahz
- [BUG] Snapshot trim wrong label in the volume detail page. ([5127](https://github.com/longhorn/longhorn/issues/5127)) - @smallteeths @chriscchien
- [BUG] Filesystem on the volume with a backing image is corrupted after applying trim operation ([5129](https://github.com/longhorn/longhorn/issues/5129)) - @derekbit @chriscchien
- [BUG] Error in uninstall job ([5132](https://github.com/longhorn/longhorn/issues/5132)) - @c3y1huang @chriscchien
- [BUG] Uninstall job unable to delete the systembackup and systemrestore cr. ([5133](https://github.com/longhorn/longhorn/issues/5133)) - @c3y1huang @chriscchien
- [BUG] Nil pointer dereference error on restoring the system backup ([5134](https://github.com/longhorn/longhorn/issues/5134)) - @yangchiu @c3y1huang
- [BUG] UI option Update Replicas Auto Balance should use capital letter like others ([5154](https://github.com/longhorn/longhorn/issues/5154)) - @smallteeths @chriscchien
- [BUG] System restore cannot roll out when volume name is different to the PV ([5157](https://github.com/longhorn/longhorn/issues/5157)) - @yangchiu @c3y1huang
- [BUG] Online expansion doesn't succeed after a failed expansion ([5169](https://github.com/longhorn/longhorn/issues/5169)) - @derekbit @shuo-wu @khushboo-rancher

## Misc

- [DOC] RWX support for NVIDIA JETSON Ubuntu 18.4LTS kernel requires enabling NFSV4.1 ([3157](https://github.com/longhorn/longhorn/issues/3157)) - @yangchiu @derekbit
- [DOC] Add information about encryption algorithm to documentation ([3285](https://github.com/longhorn/longhorn/issues/3285)) - @mantissahz
- [DOC] Update the doc of volume size after introducing snapshot prune ([4158](https://github.com/longhorn/longhorn/issues/4158)) - @shuo-wu
- [Doc] Update the outdated "Customizing Default Settings" document ([4174](https://github.com/longhorn/longhorn/issues/4174)) - @derekbit
- [TASK] Refresh distro version support for 1.4 ([4401](https://github.com/longhorn/longhorn/issues/4401)) - @weizhe0422
- [TASK] Update official document Longhorn Networking ([4478](https://github.com/longhorn/longhorn/issues/4478)) - @derekbit
- [TASK] Update preserveUnknownFields fields in longhorn-manager CRD manifest ([4505](https://github.com/longhorn/longhorn/issues/4505)) - @derekbit @roger-ryao
- [TASK] Disable doc search for archived versions < 1.1 ([4524](https://github.com/longhorn/longhorn/issues/4524)) - @mantissahz
- [TASK] Update longhorn components with the latest backupstore ([4552](https://github.com/longhorn/longhorn/issues/4552)) - @derekbit
- [TASK] Update base image of all components from BCI 15.3 to 15.4 ([4617](https://github.com/longhorn/longhorn/issues/4617)) - @yangchiu
- [DOC] Update the Longhorn document in Install with Helm ([4745](https://github.com/longhorn/longhorn/issues/4745)) - @roger-ryao
- [TASK] Create longhornio support-bundle-kit image ([4911](https://github.com/longhorn/longhorn/issues/4911)) - @yangchiu
- [DOC] Add Recurring * Jobs History Limit to setting reference ([4912](https://github.com/longhorn/longhorn/issues/4912)) - @weizhe0422 @roger-ryao
- [DOC] Add Failed Backup TTL to setting reference ([4913](https://github.com/longhorn/longhorn/issues/4913)) - @mantissahz
- [TASK] Create longhornio liveness probe image ([4945](https://github.com/longhorn/longhorn/issues/4945)) - @yangchiu
- [TASK] Make system managed components branch-based build ([5024](https://github.com/longhorn/longhorn/issues/5024)) - @yangchiu
|
|
||||||
- [TASK] Remove unstable s390x from PR check for all repos ([5040](https://github.com/longhorn/longhorn/issues/5040)) -
|
|
||||||
- [TASK] Update longhorn-share-manager's nfs-ganesha to V4.2.1 ([5083](https://github.com/longhorn/longhorn/issues/5083)) - @derekbit @mantissahz
|
|
||||||
- [DOC] Update the Longhorn document in Setting up Prometheus and Grafana ([5158](https://github.com/longhorn/longhorn/issues/5158)) - @roger-ryao
|
|
||||||
|
|
||||||
## Contributors

- @FrankYang0529
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @flkdnt
- @innobead
- @joshimoo
- @khushboo-rancher
- @mantissahz
- @olljanat
- @roger-ryao
- @shuo-wu
- @smallteeths
- @w13915984028
- @weizhe0422
- @yangchiu

## Release Note

### **v1.4.1 released!** 🎆

This release introduces improvements and bug fixes in the areas of stability, performance, space efficiency, and resilience, as described below. Please try it out and provide feedback. Thanks for all the contributions!

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.1.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.1/deploy/install/).

## Upgrade

> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.1 from v1.3.x/v1.4.0, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.1/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Highlights

- [IMPROVEMENT] Periodically clean up volume snapshots ([3836](https://github.com/longhorn/longhorn/issues/3836)) - @c3y1huang @chriscchien

## Improvement

- [IMPROVEMENT] Do not count the failure replica reuse failure caused by the disconnection ([1923](https://github.com/longhorn/longhorn/issues/1923)) - @yangchiu @mantissahz
- [IMPROVEMENT] Update uninstallation info to include the 'Deleting Confirmation Flag' in chart ([5250](https://github.com/longhorn/longhorn/issues/5250)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Fix Guaranteed Engine Manager CPU recommendation formula in UI ([5338](https://github.com/longhorn/longhorn/issues/5338)) - @c3y1huang @smallteeths @roger-ryao
- [IMPROVEMENT] Update PSP validation in the Longhorn upstream chart ([5339](https://github.com/longhorn/longhorn/issues/5339)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Update ganesha nfs to 4.2.3 ([5356](https://github.com/longhorn/longhorn/issues/5356)) - @derekbit @roger-ryao
- [IMPROVEMENT] Set write-cache of longhorn block device to off explicitly ([5382](https://github.com/longhorn/longhorn/issues/5382)) - @derekbit @chriscchien

## Stability

- [BUG] Memory leak in CSI plugin caused by stuck umount processes if the RWX volume is already gone ([5296](https://github.com/longhorn/longhorn/issues/5296)) - @derekbit @roger-ryao
- [BUG] share-manager pod failed to restart after kubelet restart ([5507](https://github.com/longhorn/longhorn/issues/5507)) - @yangchiu @derekbit

## Bugs

- [BUG] Longhorn 1.3.2 fails to backup & restore volumes behind Internet proxy ([5054](https://github.com/longhorn/longhorn/issues/5054)) - @mantissahz @chriscchien
- [BUG] RWX doesn't work with release 1.4.0 due to end grace update error from recovery backend ([5183](https://github.com/longhorn/longhorn/issues/5183)) - @derekbit @chriscchien
- [BUG] Incorrect indentation of charts/questions.yaml ([5196](https://github.com/longhorn/longhorn/issues/5196)) - @mantissahz @roger-ryao
- [BUG] Updating option "Allow snapshots removal during trim" for old volumes failed ([5218](https://github.com/longhorn/longhorn/issues/5218)) - @shuo-wu @roger-ryao
- [BUG] Incorrect router retry mechanism ([5259](https://github.com/longhorn/longhorn/issues/5259)) - @mantissahz @chriscchien
- [BUG] System Backup is stuck at Uploading if there are PVs not provisioned by CSI driver ([5286](https://github.com/longhorn/longhorn/issues/5286)) - @c3y1huang @chriscchien
- [BUG] Sync up with backup target during DR volume activation ([5292](https://github.com/longhorn/longhorn/issues/5292)) - @yangchiu @weizhe0422
- [BUG] environment_check.sh does not handle different kernel versions in cluster correctly ([5304](https://github.com/longhorn/longhorn/issues/5304)) - @achims311 @roger-ryao
- [BUG] instance-manager-r high memory consumption ([5312](https://github.com/longhorn/longhorn/issues/5312)) - @derekbit @roger-ryao
- [BUG] Replica rebuilding caused by rke2/kubelet restart ([5340](https://github.com/longhorn/longhorn/issues/5340)) - @derekbit @chriscchien
- [BUG] Error message not consistent between create/update recurring job when retain number greater than 50 ([5434](https://github.com/longhorn/longhorn/issues/5434)) - @c3y1huang @chriscchien
- [BUG] Do not copy Host header to API requests forwarded to Longhorn Manager ([5438](https://github.com/longhorn/longhorn/issues/5438)) - @yangchiu @smallteeths
- [BUG] RWX Volume attachment is getting Failed ([5456](https://github.com/longhorn/longhorn/issues/5456)) - @derekbit
- [BUG] test case test_backup_lock_deletion_during_restoration failed ([5458](https://github.com/longhorn/longhorn/issues/5458)) - @yangchiu @derekbit
- [BUG] [master] [v1.4.1-rc1] Volume restoration will never complete if attached node is down ([5464](https://github.com/longhorn/longhorn/issues/5464)) - @derekbit @weizhe0422 @chriscchien
- [BUG] Unable to create support bundle agent pod in air-gap environment ([5467](https://github.com/longhorn/longhorn/issues/5467)) - @yangchiu @c3y1huang
- [BUG] Node disconnection test failed ([5476](https://github.com/longhorn/longhorn/issues/5476)) - @yangchiu @derekbit
- [BUG] Physical node down test failed ([5477](https://github.com/longhorn/longhorn/issues/5477)) - @derekbit @chriscchien
- [BUG] Backing image with sync failure ([5481](https://github.com/longhorn/longhorn/issues/5481)) - @ChanYiLin @roger-ryao
- [BUG] Example of data migration doesn't work for hidden/./dot-files ([5484](https://github.com/longhorn/longhorn/issues/5484)) - @hedefalk @shuo-wu @chriscchien
- [BUG] test case test_dr_volume_with_backup_block_deletion failed ([5489](https://github.com/longhorn/longhorn/issues/5489)) - @yangchiu @derekbit

## Misc

- [TASK][UI] add new recurring job tasks ([5272](https://github.com/longhorn/longhorn/issues/5272)) - @smallteeths @chriscchien

## Contributors

- @ChanYiLin
- @PhanLe1010
- @achims311
- @c3y1huang
- @chriscchien
- @derekbit
- @hedefalk
- @innobead
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

## Release Note

### **v1.4.2 released!** 🎆

Longhorn v1.4.2 is the latest stable version of Longhorn 1.4.
It introduces improvements and bug fixes in the areas of stability, performance, space efficiency, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.4.2.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.2/deploy/install/).

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.4.2/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.2 from v1.3.x/v1.4.x, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.2/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Highlights

- [IMPROVEMENT] Use PDB to protect Longhorn components from unexpected drains ([3304](https://github.com/longhorn/longhorn/issues/3304)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Introduce timeout mechanism for the sparse file syncing service ([4305](https://github.com/longhorn/longhorn/issues/4305)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Recurring jobs create new snapshots while being not able to clean up old ones ([4898](https://github.com/longhorn/longhorn/issues/4898)) - @mantissahz @chriscchien

## Improvement

- [IMPROVEMENT] Support bundle collects dmesg, syslog and related information of longhorn nodes ([5073](https://github.com/longhorn/longhorn/issues/5073)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Fix BackingImage uploading/downloading flow to prevent client timeout ([5443](https://github.com/longhorn/longhorn/issues/5443)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Create a new setting so that Longhorn removes PDB for instance-manager-r that doesn't have any running instance inside it ([5549](https://github.com/longhorn/longhorn/issues/5549)) - @PhanLe1010 @khushboo-rancher
- [IMPROVEMENT] Deprecate the setting `allow-node-drain-with-last-healthy-replica` and replace it by `node-drain-policy` setting ([5585](https://github.com/longhorn/longhorn/issues/5585)) - @yangchiu @PhanLe1010
- [IMPROVEMENT][UI] Recurring jobs create new snapshots while being not able to clean up old one ([5610](https://github.com/longhorn/longhorn/issues/5610)) - @mantissahz @smallteeths @roger-ryao
- [IMPROVEMENT] Only activate replica if it doesn't have deletion timestamp during volume engine upgrade ([5632](https://github.com/longhorn/longhorn/issues/5632)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Clean up backup target if the backup target setting is unset ([5655](https://github.com/longhorn/longhorn/issues/5655)) - @yangchiu @ChanYiLin

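The `node-drain-policy` setting that replaces `allow-node-drain-with-last-healthy-replica` can be managed declaratively. Below is a minimal sketch based on the Longhorn `Setting` custom resource; the exact value shown (`block-if-contains-last-replica`) is taken from the documented policy options, but verify field names against the settings reference for your Longhorn version:

```yaml
# Hypothetical example: set the node drain policy via the Longhorn
# Setting CR rather than the deprecated boolean setting.
# Documented values include block-if-contains-last-replica (default),
# allow-if-replica-is-stopped, and always-allow.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: node-drain-policy
  namespace: longhorn-system
value: block-if-contains-last-replica
```

The same change can also be made through the Longhorn UI settings page.
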
## Resilience

- [BUG] Directly mark replica as failed if the node is deleted ([5542](https://github.com/longhorn/longhorn/issues/5542)) - @weizhe0422 @roger-ryao
- [BUG] RWX volume is stuck at detaching when the attached node is down ([5558](https://github.com/longhorn/longhorn/issues/5558)) - @derekbit @roger-ryao
- [BUG] Backup monitor gets stuck in an infinite loop if backup isn't found ([5662](https://github.com/longhorn/longhorn/issues/5662)) - @derekbit @chriscchien
- [BUG] Resources such as replicas are somehow not mutated when network is unstable ([5762](https://github.com/longhorn/longhorn/issues/5762)) - @derekbit @roger-ryao
- [BUG] Instance manager may not update instance status for a minute after starting ([5809](https://github.com/longhorn/longhorn/issues/5809)) - @ejweber @chriscchien

## Bugs

- [BUG] Delete a uploading backing image, the corresponding LH temp file is not deleted ([3682](https://github.com/longhorn/longhorn/issues/3682)) - @ChanYiLin @chriscchien
- [BUG] Can not create backup in engine image not fully deployed cluster ([5248](https://github.com/longhorn/longhorn/issues/5248)) - @ChanYiLin @roger-ryao
- [BUG] Upgrade engine --> spec.restoreVolumeRecurringJob and spec.snapshotDataIntegrity Unsupported value ([5485](https://github.com/longhorn/longhorn/issues/5485)) - @yangchiu @derekbit
- [BUG] Bulk backup deletion cause restoring volume to finish with attached state. ([5506](https://github.com/longhorn/longhorn/issues/5506)) - @ChanYiLin @roger-ryao
- [BUG] volume expansion starts for no reason, gets stuck on current size > expected size ([5513](https://github.com/longhorn/longhorn/issues/5513)) - @mantissahz @roger-ryao
- [BUG] RWX volume attachment failed if tried more enough times ([5537](https://github.com/longhorn/longhorn/issues/5537)) - @yangchiu @derekbit
- [BUG] instance-manager-e emits `Wait for process pvc-xxxx to shutdown` constantly ([5575](https://github.com/longhorn/longhorn/issues/5575)) - @derekbit @roger-ryao
- [BUG] Support bundle kit should respect node selector & taint toleration ([5614](https://github.com/longhorn/longhorn/issues/5614)) - @yangchiu @c3y1huang
- [BUG] Value overlapped in page Instance Manager Image ([5622](https://github.com/longhorn/longhorn/issues/5622)) - @smallteeths @chriscchien
- [BUG] Instance manager PDB created with wrong selector thus blocking the draining of the wrongly selected node forever ([5680](https://github.com/longhorn/longhorn/issues/5680)) - @PhanLe1010 @chriscchien
- [BUG] During volume live engine upgrade, if the replica pod is killed, the volume is stuck in upgrading forever ([5684](https://github.com/longhorn/longhorn/issues/5684)) - @yangchiu @PhanLe1010
- [BUG] Instance manager PDBs cannot be removed if the longhorn-manager pod on its spec node is not available ([5688](https://github.com/longhorn/longhorn/issues/5688)) - @PhanLe1010 @roger-ryao
- [BUG] Rebuild rebuilding is possibly issued to a wrong replica ([5709](https://github.com/longhorn/longhorn/issues/5709)) - @ejweber @roger-ryao
- [BUG] longhorn upgrade is not upgrading engineimage ([5740](https://github.com/longhorn/longhorn/issues/5740)) - @shuo-wu @chriscchien
- [BUG] `test_replica_auto_balance_when_replica_on_unschedulable_node` Error in creating volume with nodeSelector and dataLocality parameters ([5745](https://github.com/longhorn/longhorn/issues/5745)) - @c3y1huang @roger-ryao
- [BUG] Unable to backup volume after NFS server IP change ([5856](https://github.com/longhorn/longhorn/issues/5856)) - @derekbit @roger-ryao

## Misc

- [TASK] Check and update the networking doc & example YAMLs ([5651](https://github.com/longhorn/longhorn/issues/5651)) - @yangchiu @shuo-wu

## Contributors

- @ChanYiLin
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

## Release Note

### **v1.4.3 released!** 🎆

Longhorn v1.4.3 is the latest stable version of Longhorn 1.4.
It introduces improvements and bug fixes in the areas of stability, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.4.3.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.3/deploy/install/).

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.4.3/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.3 from v1.3.x/v1.4.x, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.3/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Improvement

- [IMPROVEMENT] Assign the pods to the same node where the strict-local volume is present ([5448](https://github.com/longhorn/longhorn/issues/5448)) - @c3y1huang @chriscchien
## Resilience

- [BUG] filesystem corrupted after delete instance-manager-r for a locality best-effort volume ([5801](https://github.com/longhorn/longhorn/issues/5801)) - @yangchiu @ChanYiLin @mantissahz

## Bugs

- [BUG] 'Upgrade Engine' still shows up in a specific situation when engine already upgraded ([3063](https://github.com/longhorn/longhorn/issues/3063)) - @weizhe0422 @PhanLe1010 @smallteeths
- [BUG] DR volume even after activation remains in standby mode if there are one or more failed replicas. ([3069](https://github.com/longhorn/longhorn/issues/3069)) - @yangchiu @mantissahz
- [BUG] Prevent Longhorn uninstallation from getting stuck due to backups in error ([5868](https://github.com/longhorn/longhorn/issues/5868)) - @ChanYiLin @mantissahz
- [BUG] Unable to create support bundle if the previous one stayed in ReadyForDownload phase ([5882](https://github.com/longhorn/longhorn/issues/5882)) - @c3y1huang @roger-ryao
- [BUG] share-manager for a given pvc keep restarting (other pvc are working fine) ([5954](https://github.com/longhorn/longhorn/issues/5954)) - @yangchiu @derekbit
- [BUG] Replica auto-rebalance doesn't respect node selector ([5971](https://github.com/longhorn/longhorn/issues/5971)) - @c3y1huang @roger-ryao
- [BUG] Extra snapshot generated when clone from a detached volume ([5986](https://github.com/longhorn/longhorn/issues/5986)) - @weizhe0422 @ejweber
- [BUG] User created snapshot deleted after node drain and uncordon ([5992](https://github.com/longhorn/longhorn/issues/5992)) - @yangchiu @mantissahz
- [BUG] In some specific situation, system backup auto deleted when creating another one ([6045](https://github.com/longhorn/longhorn/issues/6045)) - @c3y1huang @chriscchien
- [BUG] Backing Image deletion stuck if it's deleted during uploading process and bids is ready-for-transfer state ([6086](https://github.com/longhorn/longhorn/issues/6086)) - @WebberHuang1118 @chriscchien
- [BUG] Backing image manager fails when SELinux is enabled ([6108](https://github.com/longhorn/longhorn/issues/6108)) - @ejweber @chriscchien
- [BUG] test_dr_volume_with_restore_command_error failed ([6130](https://github.com/longhorn/longhorn/issues/6130)) - @mantissahz @roger-ryao
- [BUG] Longhorn doesn't remove the system backups crd on uninstallation ([6185](https://github.com/longhorn/longhorn/issues/6185)) - @c3y1huang @khushboo-rancher
- [BUG] Test case test_ha_backup_deletion_recovery failed in rhel or rockylinux arm64 environment ([6213](https://github.com/longhorn/longhorn/issues/6213)) - @yangchiu @ChanYiLin @mantissahz
- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber
- [BUG] Unable to receive support bundle from UI when it's large (400MB+) ([6256](https://github.com/longhorn/longhorn/issues/6256)) - @c3y1huang @chriscchien
- [BUG] Migration test case failed: unable to detach volume migration is not ready yet ([6238](https://github.com/longhorn/longhorn/issues/6238)) - @yangchiu @PhanLe1010 @khushboo-rancher
- [BUG] Restored Volumes stuck in attaching state ([6239](https://github.com/longhorn/longhorn/issues/6239)) - @derekbit @roger-ryao

## Contributors

- @ChanYiLin
- @PhanLe1010
- @WebberHuang1118
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @smallteeths
- @weizhe0422
- @yangchiu

## Release Note

### **v1.5.0 released!** 🎆

Longhorn v1.5.0 is the latest version of Longhorn 1.5.
It introduces many enhancements, improvements, and bug fixes as described below, covering performance, stability, maintenance, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

- [v2 Data Engine based on SPDK - Preview](https://github.com/longhorn/longhorn/issues/5751)

  > **Please note that this is a preview feature and should not be used in any production environment. A preview feature is disabled by default and may change in subsequent versions until it becomes generally available.**

  In addition to the existing iSCSI stack (v1) data engine, we are introducing the v2 data engine based on SPDK (Storage Performance Development Kit). This release includes the introduction of volume lifecycle management, degraded volume handling, offline replica rebuilding, block device management, and orphaned replica management. For the performance benchmark and comparison with v1, check the report [here](https://longhorn.io/docs/1.5.0/spdk/performance-benchmark/).

- [Longhorn Volume Attachment](https://github.com/longhorn/longhorn/issues/3715)

  Introducing the new Longhorn VolumeAttachment CR, which ensures exclusive attachment and supports automatic volume attachment and detachment for various headless operations such as volume cloning, backing image export, and recurring jobs.

- [Cluster Autoscaler - GA](https://github.com/longhorn/longhorn/issues/5238)

  Cluster Autoscaler was initially introduced as an experimental feature in v1.3. After undergoing automatic validation on different public cloud Kubernetes distributions and receiving user feedback, it has now reached general availability.

- [Instance Manager Engine & Replica Consolidation](https://github.com/longhorn/longhorn/issues/5208)

  Previously, there were two separate instance manager pods responsible for volume engine and replica process management. However, this setup required high resource usage, especially during live upgrades. In this release, we have merged these pods into a single instance manager, reducing the initial resource requirements.

- [Volume Backup Compression Methods](https://github.com/longhorn/longhorn/issues/5189)

  Longhorn supports different compression methods for volume backups, including lz4, gzip, or no compression. This allows users to choose the most suitable method based on their data type and usage requirements.

- [Automatic Volume Trim Recurring Job](https://github.com/longhorn/longhorn/issues/5186)

  While volume filesystem trim was introduced in v1.4, users had to perform the operation manually. From this release, users can create a recurring job that automatically runs the trim process, improving space efficiency without requiring human intervention.

- [RWX Volume Trim](https://github.com/longhorn/longhorn/issues/5143)

  Longhorn supports filesystem trim for RWX (Read-Write-Many) volumes, expanding the trim functionality beyond RWO (Read-Write-Once) volumes only.

- [Upgrade Path Enforcement & Downgrade Prevention](https://github.com/longhorn/longhorn/issues/5131)

  To ensure compatibility after an upgrade, we have implemented upgrade path enforcement. This prevents unintended downgrades and ensures the system and data remain intact.

- [Backing Image Management via CSI VolumeSnapshot](https://github.com/longhorn/longhorn/issues/5005)

  Users can now utilize the unified CSI VolumeSnapshot interface to manage Backing Images similar to volume snapshots and backups.

- [Snapshot Cleanup & Delete Recurring Job](https://github.com/longhorn/longhorn/issues/3836)

  Introducing two new recurring job types specifically designed for snapshot cleanup and deletion. These jobs allow users to remove unnecessary snapshots for better space efficiency.

- [CIFS Backup Store](https://github.com/longhorn/longhorn/issues/3599) & [Azure Backup Store](https://github.com/longhorn/longhorn/issues/1309)

  To enhance users' backup strategies and align with data governance policies, Longhorn now supports additional backup storage protocols, including CIFS and Azure.

- [Kubernetes Upgrade Node Drain Policy](https://github.com/longhorn/longhorn/issues/3304)

  The new Node Drain Policy provides flexible strategies to protect volume data during Kubernetes upgrades or node maintenance operations. This ensures the integrity and availability of your volumes.

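As an illustration of the new trim and snapshot recurring job types described above, a recurring job is defined as a Longhorn `RecurringJob` custom resource. The manifest below is a minimal sketch; the job name, schedule, and group are assumptions for the example, so adapt them and verify the fields against the recurring job documentation:

```yaml
# Hypothetical example: run a filesystem trim on all volumes in the
# "default" group every day at 03:00. The "filesystem-trim" task is
# the automatic trim job type; "snapshot-cleanup" and
# "snapshot-delete" are the snapshot housekeeping variants.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-trim
  namespace: longhorn-system
spec:
  name: nightly-trim
  task: filesystem-trim
  cron: "0 3 * * *"
  groups:
    - default
  retain: 0        # retain has no effect for trim tasks
  concurrency: 2   # how many volumes are processed in parallel
  labels: {}
```

Volumes are associated with the job via the `default` group or by labeling them with the job name.
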
## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.5.0.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.5.0/deploy/install/).

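For the Helm path, the installation from the official chart repository typically looks like the following sketch (the chart repository URL and release name match the linked install guide, but confirm flags and values against the documentation for your environment):

```shell
# Add the official Longhorn chart repository and install v1.5.0
# into its own namespace.
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --version 1.5.0

# Confirm the deployment once the pods settle.
kubectl -n longhorn-system get pods
```

These commands require access to a running Kubernetes cluster with Helm v3 configured.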
## Upgrade

> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.0 from v1.4.x, which is the only supported source version.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.5.0/deploy/upgrade/).

## Deprecation & Incompatibilities

Please check the [important notes](https://longhorn.io/docs/1.5.0/deploy/important-notes/) to learn more about deprecated and removed features, incompatibilities, and other important changes. If you upgrade indirectly from an older version such as v1.3.x, please also check the corresponding important notes for each version along the upgrade path.

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Highlights

- [DOC] Provide the user guide for Kubernetes upgrade ([494](https://github.com/longhorn/longhorn/issues/494)) - @PhanLe1010
- [FEATURE] Backups to Azure Blob Storage ([1309](https://github.com/longhorn/longhorn/issues/1309)) - @mantissahz @chriscchien
- [IMPROVEMENT] Use PDB to protect Longhorn components from unexpected drains ([3304](https://github.com/longhorn/longhorn/issues/3304)) - @yangchiu @PhanLe1010
- [FEATURE] CIFS Backup Store Support ([3599](https://github.com/longhorn/longhorn/issues/3599)) - @derekbit @chriscchien
- [IMPROVEMENT] Consolidate volume attach/detach implementation ([3715](https://github.com/longhorn/longhorn/issues/3715)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Periodically clean up volume snapshots ([3836](https://github.com/longhorn/longhorn/issues/3836)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Introduce timeout mechanism for the sparse file syncing service ([4305](https://github.com/longhorn/longhorn/issues/4305)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Recurring jobs create new snapshots while being not able to clean up old ones ([4898](https://github.com/longhorn/longhorn/issues/4898)) - @mantissahz @chriscchien
- [FEATURE] BackingImage Management via VolumeSnapshot ([5005](https://github.com/longhorn/longhorn/issues/5005)) - @ChanYiLin @chriscchien
- [FEATURE] Upgrade path enforcement & downgrade prevention ([5131](https://github.com/longhorn/longhorn/issues/5131)) - @yangchiu @mantissahz
- [FEATURE] Support RWX volume trim ([5143](https://github.com/longhorn/longhorn/issues/5143)) - @derekbit @chriscchien
- [FEATURE] Auto Trim via recurring job ([5186](https://github.com/longhorn/longhorn/issues/5186)) - @c3y1huang @chriscchien
- [FEATURE] Introduce faster compression and multiple threads for volume backup & restore ([5189](https://github.com/longhorn/longhorn/issues/5189)) - @derekbit @roger-ryao
- [FEATURE] Consolidate Instance Manager Engine & Replica for resource consumption reduction ([5208](https://github.com/longhorn/longhorn/issues/5208)) - @yangchiu @c3y1huang
- [FEATURE] Cluster Autoscaler Support GA ([5238](https://github.com/longhorn/longhorn/issues/5238)) - @yangchiu @c3y1huang
- [FEATURE] Update K8s version support and component/pkg/build dependencies for Longhorn 1.5 ([5595](https://github.com/longhorn/longhorn/issues/5595)) - @yangchiu @ejweber
- [FEATURE] Support SPDK Data Engine - Preview ([5751](https://github.com/longhorn/longhorn/issues/5751)) - @derekbit @shuo-wu @DamiaSan

## Enhancements

- [FEATURE] Allow users to directly activate a restoring/DR volume as long as there is one ready replica ([1512](https://github.com/longhorn/longhorn/issues/1512)) - @mantissahz @weizhe0422
- [REFACTOR] Volume controller refactoring/split-up to simplify the control flow ([2527](https://github.com/longhorn/longhorn/issues/2527)) - @PhanLe1010 @chriscchien
- [FEATURE] Import and export SPDK longhorn volumes to longhorn sparse file directory ([4100](https://github.com/longhorn/longhorn/issues/4100)) - @DamiaSan
- [FEATURE] Add a global `storage reserved` setting for newly created longhorn nodes' disks ([4773](https://github.com/longhorn/longhorn/issues/4773)) - @mantissahz @chriscchien
- [FEATURE] Support backup volumes during system backup ([5011](https://github.com/longhorn/longhorn/issues/5011)) - @c3y1huang @chriscchien
- [FEATURE] Support SPDK lvol shallow copy for new replica creation ([5217](https://github.com/longhorn/longhorn/issues/5217)) - @DamiaSan
- [FEATURE] Introduce longhorn-spdk-engine for SPDK volume management ([5282](https://github.com/longhorn/longhorn/issues/5282)) - @shuo-wu
- [FEATURE] Support replica-zone-soft-anti-affinity setting per volume ([5358](https://github.com/longhorn/longhorn/issues/5358)) - @ChanYiLin @smallteeths @chriscchien
- [FEATURE] Install Opt-In NetworkPolicies ([5403](https://github.com/longhorn/longhorn/issues/5403)) - @yangchiu @ChanYiLin
- [FEATURE] Create Longhorn SPDK Engine component with basic fundamental functions ([5406](https://github.com/longhorn/longhorn/issues/5406)) - @shuo-wu
- [FEATURE] Add status APIs for shallow copy and IO pause/resume ([5647](https://github.com/longhorn/longhorn/issues/5647)) - @DamiaSan
- [FEATURE] Introduce a new disk type, disk management and replica scheduler for SPDK volumes ([5683](https://github.com/longhorn/longhorn/issues/5683)) - @derekbit @roger-ryao
- [FEATURE] Support replica scheduling for SPDK volume ([5711](https://github.com/longhorn/longhorn/issues/5711)) - @derekbit
- [FEATURE] Create SPDK gRPC service for instance manager ([5712](https://github.com/longhorn/longhorn/issues/5712)) - @shuo-wu
- [FEATURE] Environment check script for Longhorn with SPDK ([5738](https://github.com/longhorn/longhorn/issues/5738)) - @derekbit @chriscchien
- [FEATURE] Deployment manifests for helping install SPDK dependencies, utilities and libraries ([5739](https://github.com/longhorn/longhorn/issues/5739)) - @yangchiu @derekbit
- [FEATURE] Implement Disk gRPC Service in Instance Manager for collecting SPDK disk statistics from SPDK gRPC service ([5744](https://github.com/longhorn/longhorn/issues/5744)) - @derekbit @chriscchien
- [FEATURE] Support for SPDK RAID1 by setting the minimum number of base_bdevs to 1 ([5758](https://github.com/longhorn/longhorn/issues/5758)) - @yangchiu @DamiaSan
- [FEATURE] Add a global setting for enabling and disabling SPDK feature ([5778](https://github.com/longhorn/longhorn/issues/5778)) - @yangchiu @derekbit
- [FEATURE] Identify and manage orphaned lvols and raid bdevs if the associated `Volume` resources do not exist ([5827](https://github.com/longhorn/longhorn/issues/5827)) - @yangchiu @derekbit
- [FEATURE] Longhorn UI for SPDK feature ([5846](https://github.com/longhorn/longhorn/issues/5846)) - @smallteeths @chriscchien
- [FEATURE] UI modification to work with new AD mechanism (Longhorn UI -> Longhorn API) ([6004](https://github.com/longhorn/longhorn/issues/6004)) - @yangchiu @smallteeths
- [FEATURE] Replica offline rebuild over SPDK - data engine ([6067](https://github.com/longhorn/longhorn/issues/6067)) - @shuo-wu
- [FEATURE] Support automatic offline replica rebuilding of volumes using SPDK data engine ([6071](https://github.com/longhorn/longhorn/issues/6071)) - @yangchiu @derekbit

## Improvement

- [IMPROVEMENT] Do not count the failed replica reuse failure caused by the disconnection ([1923](https://github.com/longhorn/longhorn/issues/1923)) - @yangchiu @mantissahz
- [IMPROVEMENT] Consider changing the over-provisioning default/recommendation to 100% (no over-provisioning) ([2694](https://github.com/longhorn/longhorn/issues/2694)) - @c3y1huang @chriscchien
- [BUG] StorageClass of the PV and PVC of a recovered PV should not always be the default ([3506](https://github.com/longhorn/longhorn/issues/3506)) - @ChanYiLin @smallteeths @roger-ryao
- [IMPROVEMENT] Auto-attach volume for K8s CSI snapshot ([3726](https://github.com/longhorn/longhorn/issues/3726)) - @weizhe0422 @PhanLe1010
- [IMPROVEMENT] Change Longhorn API to create/delete snapshot CRs instead of calling engine CLI ([3995](https://github.com/longhorn/longhorn/issues/3995)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Add support for crypto parameters for RWX volumes ([4829](https://github.com/longhorn/longhorn/issues/4829)) - @mantissahz @roger-ryao
- [IMPROVEMENT] Remove the global setting `mkfs-ext4-parameters` ([4914](https://github.com/longhorn/longhorn/issues/4914)) - @ejweber @roger-ryao
- [IMPROVEMENT] Move all snapshot-related settings to one place ([4930](https://github.com/longhorn/longhorn/issues/4930)) - @smallteeths @roger-ryao
- [IMPROVEMENT] Remove system managed component image settings ([5028](https://github.com/longhorn/longhorn/issues/5028)) - @mantissahz @chriscchien
- [IMPROVEMENT] Set default `engine-replica-timeout` value for engine controller start command ([5031](https://github.com/longhorn/longhorn/issues/5031)) - @derekbit @chriscchien
- [IMPROVEMENT] Support bundle collects dmesg, syslog and related information of Longhorn nodes ([5073](https://github.com/longhorn/longhorn/issues/5073)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Collect volume, system, and feature info in metrics for better usage awareness ([5235](https://github.com/longhorn/longhorn/issues/5235)) - @c3y1huang @chriscchien @roger-ryao
- [IMPROVEMENT] Update uninstallation info to include the 'Deleting Confirmation Flag' in chart ([5250](https://github.com/longhorn/longhorn/issues/5250)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Disable Revision Counter for Strict-Local dataLocality ([5257](https://github.com/longhorn/longhorn/issues/5257)) - @derekbit @roger-ryao
- [IMPROVEMENT] Fix Guaranteed Engine Manager CPU recommendation formula in UI ([5338](https://github.com/longhorn/longhorn/issues/5338)) - @c3y1huang @smallteeths @roger-ryao
- [IMPROVEMENT] Update PSP validation in the Longhorn upstream chart ([5339](https://github.com/longhorn/longhorn/issues/5339)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Update Ganesha NFS to 4.2.3 ([5356](https://github.com/longhorn/longhorn/issues/5356)) - @derekbit @roger-ryao
- [IMPROVEMENT] Set write-cache of longhorn block device to off explicitly ([5382](https://github.com/longhorn/longhorn/issues/5382)) - @derekbit @chriscchien
- [IMPROVEMENT] Clean up unused backupstore mountpoint ([5391](https://github.com/longhorn/longhorn/issues/5391)) - @derekbit @chriscchien
- [DOC] Update Kubernetes version info in chart to be consistent with the Longhorn documentation ([5399](https://github.com/longhorn/longhorn/issues/5399)) - @ChanYiLin @roger-ryao
- [IMPROVEMENT] Fix BackingImage uploading/downloading flow to prevent client timeout ([5443](https://github.com/longhorn/longhorn/issues/5443)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Assign the pods to the same node where the strict-local volume is present ([5448](https://github.com/longhorn/longhorn/issues/5448)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Show an explicit message when trying to attach a volume whose engine and replica were on a deleted node ([5545](https://github.com/longhorn/longhorn/issues/5545)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Create a new setting so that Longhorn removes the PDB for an instance-manager-r that doesn't have any running instance inside it ([5549](https://github.com/longhorn/longhorn/issues/5549)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Merge conversion/admission webhook and recovery backend services into longhorn-manager ([5590](https://github.com/longhorn/longhorn/issues/5590)) - @ChanYiLin @chriscchien
- [IMPROVEMENT][UI] Recurring jobs create new snapshots while not being able to clean up old ones ([5610](https://github.com/longhorn/longhorn/issues/5610)) - @mantissahz @smallteeths @roger-ryao
- [IMPROVEMENT] Only activate replica if it doesn't have deletion timestamp during volume engine upgrade ([5632](https://github.com/longhorn/longhorn/issues/5632)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Clean up backup target if the backup target setting is unset ([5655](https://github.com/longhorn/longhorn/issues/5655)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Bump CSI sidecar components' version ([5672](https://github.com/longhorn/longhorn/issues/5672)) - @yangchiu @ejweber
- [IMPROVEMENT] Configure log level of Longhorn components ([5888](https://github.com/longhorn/longhorn/issues/5888)) - @ChanYiLin @weizhe0422
- [IMPROVEMENT] Remove development toolchain from Longhorn images ([6022](https://github.com/longhorn/longhorn/issues/6022)) - @ChanYiLin @derekbit
- [IMPROVEMENT] Reduce replica process's number of allocated ports ([6079](https://github.com/longhorn/longhorn/issues/6079)) - @ChanYiLin @derekbit
- [IMPROVEMENT] UI supports automatic replica rebuilding for SPDK volumes ([6107](https://github.com/longhorn/longhorn/issues/6107)) - @smallteeths @roger-ryao
- [IMPROVEMENT] Minor UX changes for Longhorn SPDK ([6126](https://github.com/longhorn/longhorn/issues/6126)) - @derekbit @roger-ryao
- [IMPROVEMENT] Instance manager spdk_tgt resilience due to spdk_tgt crash ([6155](https://github.com/longhorn/longhorn/issues/6155)) - @yangchiu @derekbit
- [IMPROVEMENT] Determine the replica/engine port count in longhorn-manager (control plane) instead ([6163](https://github.com/longhorn/longhorn/issues/6163)) - @derekbit @chriscchien
- [IMPROVEMENT] SPDK client should function after encountering a decoding error ([6191](https://github.com/longhorn/longhorn/issues/6191)) - @yangchiu @shuo-wu

## Performance

- [REFACTORING] Evaluate the impact of removing the client side compression for backup blocks ([1409](https://github.com/longhorn/longhorn/issues/1409)) - @derekbit

## Resilience

- [BUG] If backing image downloading fails on one node, it doesn't try on other nodes ([3746](https://github.com/longhorn/longhorn/issues/3746)) - @ChanYiLin
- [BUG] Replica rebuilding caused by rke2/kubelet restart ([5340](https://github.com/longhorn/longhorn/issues/5340)) - @derekbit @chriscchien
- [BUG] Volume restoration will never complete if attached node is down ([5464](https://github.com/longhorn/longhorn/issues/5464)) - @derekbit @weizhe0422 @chriscchien
- [BUG] Node disconnection test failed ([5476](https://github.com/longhorn/longhorn/issues/5476)) - @yangchiu @derekbit
- [BUG] Physical node down test failed ([5477](https://github.com/longhorn/longhorn/issues/5477)) - @derekbit @chriscchien
- [BUG] Backing image with sync failure ([5481](https://github.com/longhorn/longhorn/issues/5481)) - @ChanYiLin @roger-ryao
- [BUG] share-manager pod failed to restart after kubelet restart ([5507](https://github.com/longhorn/longhorn/issues/5507)) - @yangchiu @derekbit
- [BUG] Directly mark replica as failed if the node is deleted ([5542](https://github.com/longhorn/longhorn/issues/5542)) - @weizhe0422 @roger-ryao
- [BUG] RWX volume is stuck at detaching when the attached node is down ([5558](https://github.com/longhorn/longhorn/issues/5558)) - @derekbit @roger-ryao
- [BUG] Unable to export RAID1 bdev in degraded state ([5650](https://github.com/longhorn/longhorn/issues/5650)) - @chriscchien @DamiaSan
- [BUG] Backup monitor gets stuck in an infinite loop if backup isn't found ([5662](https://github.com/longhorn/longhorn/issues/5662)) - @derekbit @chriscchien
- [BUG] Resources such as replicas are somehow not mutated when network is unstable ([5762](https://github.com/longhorn/longhorn/issues/5762)) - @derekbit @roger-ryao
- [BUG] Filesystem corrupted after deleting instance-manager-r for a locality best-effort volume ([5801](https://github.com/longhorn/longhorn/issues/5801)) - @yangchiu @ChanYiLin @mantissahz

## Stability

- [BUG] NFS backup broken - NFS server: mkdir - file exists ([4626](https://github.com/longhorn/longhorn/issues/4626)) - @yangchiu @derekbit
- [BUG] Memory leak in CSI plugin caused by stuck umount processes if the RWX volume is already gone ([5296](https://github.com/longhorn/longhorn/issues/5296)) - @derekbit @roger-ryao

## Bugs

- [BUG] 'Upgrade Engine' still shows up in a specific situation when engine already upgraded ([3063](https://github.com/longhorn/longhorn/issues/3063)) - @weizhe0422 @PhanLe1010 @smallteeths
- [BUG] DR volume remains in standby mode even after activation if there are one or more failed replicas ([3069](https://github.com/longhorn/longhorn/issues/3069)) - @yangchiu @mantissahz
- [BUG] Volume not able to attach with raw type backing image ([3437](https://github.com/longhorn/longhorn/issues/3437)) - @yangchiu @ChanYiLin
- [BUG] When deleting an uploading backing image, the corresponding LH temp file is not deleted ([3682](https://github.com/longhorn/longhorn/issues/3682)) - @ChanYiLin @chriscchien
- [BUG] Cloned PVC from detached volume will get stuck at not ready for workload ([3692](https://github.com/longhorn/longhorn/issues/3692)) - @PhanLe1010 @chriscchien
- [BUG] Block device volume failed to unmount when it is detached unexpectedly ([3778](https://github.com/longhorn/longhorn/issues/3778)) - @PhanLe1010 @chriscchien
- [BUG] After migration of Longhorn from Rancher old UI to dashboard, the csi-plugin doesn't update ([4519](https://github.com/longhorn/longhorn/issues/4519)) - @mantissahz @roger-ryao
- [BUG] Volumes stuck in attach/detach loop when running on OpenShift/OKD ([4988](https://github.com/longhorn/longhorn/issues/4988)) - @ChanYiLin
- [BUG] Longhorn 1.3.2 fails to backup & restore volumes behind Internet proxy ([5054](https://github.com/longhorn/longhorn/issues/5054)) - @mantissahz @chriscchien
- [BUG] Instance manager pod does not respect node taints ([5161](https://github.com/longhorn/longhorn/issues/5161)) - @ejweber
- [BUG] RWX doesn't work with release 1.4.0 due to end grace update error from recovery backend ([5183](https://github.com/longhorn/longhorn/issues/5183)) - @derekbit @chriscchien
- [BUG] Incorrect indentation of charts/questions.yaml ([5196](https://github.com/longhorn/longhorn/issues/5196)) - @mantissahz @roger-ryao
- [BUG] Updating option "Allow snapshots removal during trim" for old volumes failed ([5218](https://github.com/longhorn/longhorn/issues/5218)) - @shuo-wu @roger-ryao
- [BUG] Since 1.4.0 RWX volume failing regularly ([5224](https://github.com/longhorn/longhorn/issues/5224)) - @derekbit
- [BUG] Cannot create backup in a cluster where the engine image is not fully deployed ([5248](https://github.com/longhorn/longhorn/issues/5248)) - @ChanYiLin @roger-ryao
- [BUG] Incorrect router retry mechanism ([5259](https://github.com/longhorn/longhorn/issues/5259)) - @mantissahz @chriscchien
- [BUG] System Backup is stuck at Uploading if there are PVs not provisioned by CSI driver ([5286](https://github.com/longhorn/longhorn/issues/5286)) - @c3y1huang @chriscchien
- [BUG] Sync up with backup target during DR volume activation ([5292](https://github.com/longhorn/longhorn/issues/5292)) - @yangchiu @weizhe0422
- [BUG] environment_check.sh does not handle different kernel versions in cluster correctly ([5304](https://github.com/longhorn/longhorn/issues/5304)) - @achims311 @roger-ryao
- [BUG] instance-manager-r high memory consumption ([5312](https://github.com/longhorn/longhorn/issues/5312)) - @derekbit @roger-ryao
- [BUG] Unable to upgrade longhorn from v1.3.2 to master-head ([5368](https://github.com/longhorn/longhorn/issues/5368)) - @yangchiu @derekbit
- [BUG] Modifying engineManagerCPURequest and replicaManagerCPURequest won't raise the resource request in the instance-manager-e pod ([5419](https://github.com/longhorn/longhorn/issues/5419)) - @c3y1huang
- [BUG] Error message not consistent between create/update recurring job when retain number is greater than 50 ([5434](https://github.com/longhorn/longhorn/issues/5434)) - @c3y1huang @chriscchien
- [BUG] Do not copy Host header to API requests forwarded to Longhorn Manager ([5438](https://github.com/longhorn/longhorn/issues/5438)) - @yangchiu @smallteeths
- [BUG] RWX Volume attachment is getting Failed ([5456](https://github.com/longhorn/longhorn/issues/5456)) - @derekbit
- [BUG] test case test_backup_lock_deletion_during_restoration failed ([5458](https://github.com/longhorn/longhorn/issues/5458)) - @yangchiu @derekbit
- [BUG] Unable to create support bundle agent pod in air-gap environment ([5467](https://github.com/longhorn/longhorn/issues/5467)) - @yangchiu @c3y1huang
- [BUG] Example of data migration doesn't work for hidden/./dot-files ([5484](https://github.com/longhorn/longhorn/issues/5484)) - @hedefalk @shuo-wu @chriscchien
- [BUG] Upgrade engine --> spec.restoreVolumeRecurringJob and spec.snapshotDataIntegrity Unsupported value ([5485](https://github.com/longhorn/longhorn/issues/5485)) - @yangchiu @derekbit
- [BUG] test case test_dr_volume_with_backup_block_deletion failed ([5489](https://github.com/longhorn/longhorn/issues/5489)) - @yangchiu @derekbit
- [BUG] Bulk backup deletion causes restoring volume to finish with attached state ([5506](https://github.com/longhorn/longhorn/issues/5506)) - @ChanYiLin @roger-ryao
- [BUG] Volume expansion starts for no reason, gets stuck on current size > expected size ([5513](https://github.com/longhorn/longhorn/issues/5513)) - @mantissahz @roger-ryao
- [BUG] RWX volume attachment failed if tried enough times ([5537](https://github.com/longhorn/longhorn/issues/5537)) - @yangchiu @derekbit
- [BUG] instance-manager-e emits `Wait for process pvc-xxxx to shutdown` constantly ([5575](https://github.com/longhorn/longhorn/issues/5575)) - @derekbit @roger-ryao
- [BUG] Support bundle kit should respect node selector & taint toleration ([5614](https://github.com/longhorn/longhorn/issues/5614)) - @yangchiu @c3y1huang
- [BUG] Value overlapped in page Instance Manager Image ([5622](https://github.com/longhorn/longhorn/issues/5622)) - @smallteeths @chriscchien
- [BUG] Updated Rocky 9 (and others) can't attach due to SELinux ([5627](https://github.com/longhorn/longhorn/issues/5627)) - @yangchiu @ejweber
- [BUG] Fix misleading error messages when creating a mount point for a backup store ([5630](https://github.com/longhorn/longhorn/issues/5630)) - @derekbit
- [BUG] Instance manager PDB created with wrong selector thus blocking the draining of the wrongly selected node forever ([5680](https://github.com/longhorn/longhorn/issues/5680)) - @PhanLe1010 @chriscchien
- [BUG] During volume live engine upgrade, if the replica pod is killed, the volume is stuck in upgrading forever ([5684](https://github.com/longhorn/longhorn/issues/5684)) - @yangchiu @PhanLe1010
- [BUG] Instance manager PDBs cannot be removed if the longhorn-manager pod on its spec node is not available ([5688](https://github.com/longhorn/longhorn/issues/5688)) - @PhanLe1010 @roger-ryao
- [BUG] Rebuilding is possibly issued to a wrong replica ([5709](https://github.com/longhorn/longhorn/issues/5709)) - @ejweber @roger-ryao
- [BUG] Observing replica on new IM-r before upgrading of volume ([5729](https://github.com/longhorn/longhorn/issues/5729)) - @c3y1huang
- [BUG] Longhorn upgrade is not upgrading engineimage ([5740](https://github.com/longhorn/longhorn/issues/5740)) - @shuo-wu @chriscchien
- [BUG] `test_replica_auto_balance_when_replica_on_unschedulable_node` Error in creating volume with nodeSelector and dataLocality parameters ([5745](https://github.com/longhorn/longhorn/issues/5745)) - @c3y1huang @roger-ryao
- [BUG] Unable to backup volume after NFS server IP change ([5856](https://github.com/longhorn/longhorn/issues/5856)) - @derekbit @roger-ryao
- [BUG] Prevent Longhorn uninstallation from getting stuck due to backups in error ([5868](https://github.com/longhorn/longhorn/issues/5868)) - @ChanYiLin @mantissahz
- [BUG] Unable to create support bundle if the previous one stayed in ReadyForDownload phase ([5882](https://github.com/longhorn/longhorn/issues/5882)) - @c3y1huang @roger-ryao
- [BUG] share-manager for a given PVC keeps restarting (other PVCs are working fine) ([5954](https://github.com/longhorn/longhorn/issues/5954)) - @yangchiu @derekbit
- [BUG] Replica auto-rebalance doesn't respect node selector ([5971](https://github.com/longhorn/longhorn/issues/5971)) - @c3y1huang @roger-ryao
- [BUG] Volume detached automatically after upgrading Longhorn ([5983](https://github.com/longhorn/longhorn/issues/5983)) - @yangchiu @PhanLe1010
- [BUG] Extra snapshot generated when clone from a detached volume ([5986](https://github.com/longhorn/longhorn/issues/5986)) - @weizhe0422 @ejweber
- [BUG] User created snapshot deleted after node drain and uncordon ([5992](https://github.com/longhorn/longhorn/issues/5992)) - @yangchiu @mantissahz
- [BUG] Webhook PDBs are not removed after upgrading to master-head ([6026](https://github.com/longhorn/longhorn/issues/6026)) - @weizhe0422 @PhanLe1010
- [BUG] In some specific situation, system backup auto deleted when creating another one ([6045](https://github.com/longhorn/longhorn/issues/6045)) - @c3y1huang @chriscchien
- [BUG] Backing image deletion stuck if it's deleted during the uploading process and bids is in ready-for-transfer state ([6086](https://github.com/longhorn/longhorn/issues/6086)) - @WebberHuang1118 @chriscchien
- [BUG] A backup target backed by a Samba server is not recognized ([6100](https://github.com/longhorn/longhorn/issues/6100)) - @derekbit @weizhe0422
- [BUG] Backing image manager fails when SELinux is enabled ([6108](https://github.com/longhorn/longhorn/issues/6108)) - @ejweber @chriscchien
- [BUG] Force deleting a volume makes the SPDK disk unschedulable ([6110](https://github.com/longhorn/longhorn/issues/6110)) - @derekbit
- [BUG] share-manager terminated during Longhorn upgrade causes RWX volume not working ([6120](https://github.com/longhorn/longhorn/issues/6120)) - @yangchiu @derekbit
- [BUG] SPDK Volume snapshotList API Error ([6123](https://github.com/longhorn/longhorn/issues/6123)) - @derekbit @chriscchien
- [BUG] test_recurring_jobs_allow_detached_volume failed ([6124](https://github.com/longhorn/longhorn/issues/6124)) - @ChanYiLin @roger-ryao
- [BUG] Cron job triggered replica rebuilding keeps repeating itself after corrupting snapshot data ([6129](https://github.com/longhorn/longhorn/issues/6129)) - @yangchiu @mantissahz
- [BUG] test_dr_volume_with_restore_command_error failed ([6130](https://github.com/longhorn/longhorn/issues/6130)) - @mantissahz @roger-ryao
- [BUG] RWX volume remains attached after workload deleted if it's upgraded from v1.4.2 ([6139](https://github.com/longhorn/longhorn/issues/6139)) - @PhanLe1010 @chriscchien
- [BUG] timestamp or checksum not matched in test_snapshot_hash_detect_corruption test case ([6145](https://github.com/longhorn/longhorn/issues/6145)) - @yangchiu @derekbit
- [BUG] When a v2 volume is attached in maintenance mode, removing a replica will lead to volume stuck in attaching-detaching loop ([6166](https://github.com/longhorn/longhorn/issues/6166)) - @derekbit @chriscchien
- [BUG] Misleading offline rebuilding hint if offline rebuilding is not enabled ([6169](https://github.com/longhorn/longhorn/issues/6169)) - @smallteeths @roger-ryao
- [BUG] Longhorn doesn't remove the system backup CRD on uninstallation ([6185](https://github.com/longhorn/longhorn/issues/6185)) - @c3y1huang @khushboo-rancher
- [BUG] Volume attachment related error logs in uninstaller pod ([6197](https://github.com/longhorn/longhorn/issues/6197)) - @yangchiu @PhanLe1010
- [BUG] Test case test_ha_backup_deletion_recovery failed in rhel or rockylinux arm64 environment ([6213](https://github.com/longhorn/longhorn/issues/6213)) - @yangchiu @ChanYiLin @mantissahz
- [BUG] migration test cases could fail due to unexpected volume controllers and replicas status ([6215](https://github.com/longhorn/longhorn/issues/6215)) - @yangchiu @PhanLe1010
- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber

## Misc

- [TASK] Remove deprecated volume spec recurringJobs and storageClass recurringJobs field ([2865](https://github.com/longhorn/longhorn/issues/2865)) - @c3y1huang @chriscchien
- [TASK] Remove deprecated fields after CRD API version bump ([3289](https://github.com/longhorn/longhorn/issues/3289)) - @c3y1huang @roger-ryao
- [TASK] Replace jobq lib with an alternative way for listing remote backup volumes and info ([4176](https://github.com/longhorn/longhorn/issues/4176)) - @ChanYiLin @chriscchien
- [DOC] Update the Longhorn document in Uninstalling Longhorn using kubectl ([4841](https://github.com/longhorn/longhorn/issues/4841)) - @roger-ryao
- [TASK] Remove a deprecated feature `disable-replica-rebuild` from longhorn-manager ([4997](https://github.com/longhorn/longhorn/issues/4997)) - @ejweber @chriscchien
- [TASK] Update the distro matrix supports on Longhorn docs for 1.5 ([5177](https://github.com/longhorn/longhorn/issues/5177)) - @yangchiu
- [TASK] Clarify if any upcoming K8s API deprecation/removal will impact Longhorn 1.4 ([5180](https://github.com/longhorn/longhorn/issues/5180)) - @PhanLe1010
- [TASK] Revert affinity for Longhorn user deployed components ([5191](https://github.com/longhorn/longhorn/issues/5191)) - @weizhe0422 @ejweber
- [TASK] Add GitHub action for CI to lib repos for supporting dependency bot ([5239](https://github.com/longhorn/longhorn/issues/5239)) -
- [DOC] Update the readme of longhorn-spdk-engine about using new Longhorn (RAID1) bdev ([5256](https://github.com/longhorn/longhorn/issues/5256)) - @DamiaSan
- [TASK][UI] add new recurring job tasks ([5272](https://github.com/longhorn/longhorn/issues/5272)) - @smallteeths @chriscchien
- [DOC] Update the node maintenance doc to cover upgrade prerequisites for Rancher ([5278](https://github.com/longhorn/longhorn/issues/5278)) - @PhanLe1010
- [TASK] Run build-engine-test-images automatically when having incompatible engine on master ([5400](https://github.com/longhorn/longhorn/issues/5400)) - @yangchiu
- [TASK] Update k8s.gcr.io to registry.k8s.io in repos ([5432](https://github.com/longhorn/longhorn/issues/5432)) - @yangchiu
- [TASK][UI] add new recurring job task - filesystem trim ([5529](https://github.com/longhorn/longhorn/issues/5529)) - @smallteeths @chriscchien
- doc: update prerequisites in chart readme to make it consistent with documentation v1.3.x ([5531](https://github.com/longhorn/longhorn/pull/5531)) - @ChanYiLin
- [FEATURE] Remove deprecated `allow-node-drain-with-last-healthy-replica` ([5620](https://github.com/longhorn/longhorn/issues/5620)) - @weizhe0422 @PhanLe1010
- [FEATURE] Set recurring jobs to PVCs ([5791](https://github.com/longhorn/longhorn/issues/5791)) - @yangchiu @c3y1huang
- [TASK] Automatically update crds.yaml in longhorn repo from longhorn-manager repo ([5854](https://github.com/longhorn/longhorn/issues/5854)) - @yangchiu
- [IMPROVEMENT] Remove privilege requirement from lifecycle jobs ([5862](https://github.com/longhorn/longhorn/issues/5862)) - @mantissahz @chriscchien
- [TASK][UI] support new aio typed instance managers ([5876](https://github.com/longhorn/longhorn/issues/5876)) - @smallteeths @chriscchien
- [TASK] Remove `Guaranteed Engine Manager CPU`, `Guaranteed Replica Manager CPU`, and `Guaranteed Engine CPU` settings ([5917](https://github.com/longhorn/longhorn/issues/5917)) - @c3y1huang @roger-ryao
- [TASK][UI] Support volume backup policy ([6028](https://github.com/longhorn/longhorn/issues/6028)) - @smallteeths @chriscchien
- [TASK] Reduce BackupConcurrentLimit and RestoreConcurrentLimit default values ([6135](https://github.com/longhorn/longhorn/issues/6135)) - @derekbit @chriscchien

## Contributors

- @ChanYiLin
- @DamiaSan
- @PhanLe1010
- @WebberHuang1118
- @achims311
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @hedefalk
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

@ -1,65 +0,0 @@
|
|||||||
## Release Note

### **v1.5.1 released!** 🎆

Longhorn v1.5.1 is the latest version of Longhorn 1.5.

This release introduces bug fixes, described below, for v1.5.0 upgrade issues, stability, troubleshooting, and more. Please try it out and share your feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.5.1.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.5.1/deploy/install/).

## Upgrade
|
|
||||||
|
|
||||||
> **Please read the [important notes](https://longhorn.io/docs/1.5.1/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.1 from v1.4.x/v1.5.0, which are only supported source versions.**
|
|
||||||
|
|
||||||
Follow the upgrade instructions [here](https://longhorn.io/docs/1.5.1/deploy/upgrade/).
|
|
||||||
|
|
||||||
## Deprecation & Incompatibilities
|
|
||||||
|
|
||||||
N/A
|
|
||||||
|
|
||||||
## Known Issues after Release
|
|
||||||
|
|
||||||
Please follow up on [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) about any outstanding issues found after this release.
|
|
||||||
|
|
||||||
## Improvement

- [IMPROVEMENT] Implement/fix the unit tests of Volume Attachment and volume controller ([6005](https://github.com/longhorn/longhorn/issues/6005)) - @PhanLe1010
- [QUESTION] Repetitive warnings and errors in a new longhorn setup ([6257](https://github.com/longhorn/longhorn/issues/6257)) - @derekbit @c3y1huang @roger-ryao

## Resilience

- [BUG] 1.5.0 Upgrade: Longhorn conversion webhook server fails ([6259](https://github.com/longhorn/longhorn/issues/6259)) - @derekbit @roger-ryao
- [BUG] Race leaves snapshot CRs that cannot be deleted ([6298](https://github.com/longhorn/longhorn/issues/6298)) - @yangchiu @PhanLe1010 @ejweber

## Bugs

- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber
- [BUG] Upgrade to 1.5.0 failed: validator.longhorn.io denied the request if having orphan resources ([6246](https://github.com/longhorn/longhorn/issues/6246)) - @derekbit @roger-ryao
- [BUG] Unable to receive support bundle from UI when it's large (400MB+) ([6256](https://github.com/longhorn/longhorn/issues/6256)) - @c3y1huang @chriscchien
- [BUG] Longhorn Manager Pods CrashLoop after upgrade from 1.4.0 to 1.5.0 while backing up volumes ([6264](https://github.com/longhorn/longhorn/issues/6264)) - @ChanYiLin @roger-ryao
- [BUG] Can not delete type=`bi` VolumeSnapshot if related backing image not exist ([6266](https://github.com/longhorn/longhorn/issues/6266)) - @ChanYiLin @chriscchien
- [BUG] 1.5.0: AttachVolume.Attach failed for volume, the volume is currently attached to a different node ([6287](https://github.com/longhorn/longhorn/issues/6287)) - @yangchiu @derekbit
- [BUG] test case test_setting_priority_class failed in master and v1.5.x ([6319](https://github.com/longhorn/longhorn/issues/6319)) - @derekbit @chriscchien
- [BUG] Unused webhook and recovery backend deployment left in helm chart ([6252](https://github.com/longhorn/longhorn/issues/6252)) - @ChanYiLin @chriscchien

## Misc

- [DOC] v1.5.0 additional outgoing firewall ports need to be opened 9501 9502 9503 ([6317](https://github.com/longhorn/longhorn/issues/6317)) - @ChanYiLin @chriscchien
## Contributors

- @ChanYiLin
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @roger-ryao
- @yangchiu
@ -1,8 +1,4 @@

The list of current Longhorn maintainers:

Name, <Email>, @GitHubHandle
Sheng Yang, <sheng@yasker.org>, @yasker
Shuo Wu, <shuo.wu@suse.com>, @shuo-wu
David Ko, <dko@suse.com>, @innobead
Derek Su, <derek.su@suse.com>, @derekbit
Phan Le, <phan.le@suse.com>, @PhanLe1010
README.md
@ -1,137 +1,290 @@
<h1 align="center" style="border-bottom: none">
<a href="https://longhorn.io/" target="_blank"><img alt="Longhorn" width="120px" src="https://github.com/longhorn/website/blob/master/static/img/icon-longhorn.svg"></a><br>Longhorn
</h1>

<p align="center">A CNCF Incubating Project. Visit <a href="https://longhorn.io/" target="_blank">longhorn.io</a> for the full documentation.</p>

<div align="center">

[](https://github.com/longhorn/longhorn/releases)
[](https://github.com/longhorn/longhorn/blob/master/LICENSE)
[](https://longhorn.io/docs/latest/)

</div>

# Longhorn

### Build Status

* Engine: [](https://drone-publish.rancher.io/longhorn/longhorn-engine) [](https://goreportcard.com/report/github.com/rancher/longhorn-engine)
* Instance Manager: [](http://drone-publish.rancher.io/longhorn/longhorn-instance-manager)[](https://goreportcard.com/report/github.com/longhorn/longhorn-instance-manager)
* Manager: [](https://drone-publish.rancher.io/longhorn/longhorn-manager)[](https://goreportcard.com/report/github.com/rancher/longhorn-manager)
* UI: [](https://drone-publish.rancher.io/longhorn/longhorn-ui)
* Test: [](http://drone-publish.rancher.io/longhorn/longhorn-tests)

### Overview

Longhorn is a distributed block storage system for Kubernetes.

Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one `kubectl apply` command or using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.
Longhorn is a distributed block storage system for Kubernetes. Longhorn is cloud-native storage built using Kubernetes and container primitives.

Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one `kubectl apply` command or by using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.

Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Here are some notable features of Longhorn:

1. Enterprise-grade distributed storage with no single point of failure
2. Incremental snapshot of block storage
3. Backup to secondary storage (NFSv4 or S3-compatible object storage) built on efficient change block detection
4. Recurring snapshot and backup
5. Automated non-disruptive upgrade. You can upgrade the entire Longhorn software stack without disrupting running volumes!
6. Intuitive GUI dashboard

You can read more technical details of Longhorn [here](https://longhorn.io/).
# Releases

> **NOTE**:
> - __\<version\>*__ means the release branch is under active support and will have periodic follow-up patch releases.
> - __Latest__ release means the version is the latest release of the newest release branch.
> - __Stable__ release means the version is stable and has been widely adopted by users.

https://github.com/longhorn/longhorn/releases

| Release | Version | Type | Release Note (Changelog) | Important Note |
|-----------|---------|----------------|----------------------------------------------------------------|-------------------------------------------------------------|
| **1.5*** | 1.5.1 | Latest | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.5.1) | [🔗](https://longhorn.io/docs/1.5.1/deploy/important-notes) |
| **1.4*** | 1.4.4 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.4.4) | [🔗](https://longhorn.io/docs/1.4.4/deploy/important-notes) |
| 1.3 | 1.3.3 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.3.3) | [🔗](https://longhorn.io/docs/1.3.3/deploy/important-notes) |
| 1.2 | 1.2.6 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.2.6) | [🔗](https://longhorn.io/docs/1.2.6/deploy/important-notes) |
| 1.1 | 1.1.3 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.1.3) | |

## Current status

Longhorn is beta-quality software. We appreciate your willingness to deploy Longhorn and provide feedback.

The latest release of Longhorn is **v0.7.0**.
# Roadmap

https://github.com/longhorn/longhorn/wiki/Roadmap

# Components

Longhorn is 100% open source software. Project source code is spread across a number of repos:

* Engine: [](https://drone-publish.longhorn.io/longhorn/longhorn-engine)[](https://goreportcard.com/report/github.com/longhorn/longhorn-engine)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-engine?ref=badge_shield)
* Manager: [](https://drone-publish.longhorn.io/longhorn/longhorn-manager)[](https://goreportcard.com/report/github.com/longhorn/longhorn-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-manager?ref=badge_shield)
* Instance Manager: [](http://drone-publish.longhorn.io/longhorn/longhorn-instance-manager)[](https://goreportcard.com/report/github.com/longhorn/longhorn-instance-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-instance-manager?ref=badge_shield)
* Share Manager: [](http://drone-publish.longhorn.io/longhorn/longhorn-share-manager)[](https://goreportcard.com/report/github.com/longhorn/longhorn-share-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-share-manager?ref=badge_shield)
* Backing Image Manager: [](http://drone-publish.longhorn.io/longhorn/backing-image-manager)[](https://goreportcard.com/report/github.com/longhorn/backing-image-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Fbacking-image-manager?ref=badge_shield)
* UI: [](https://drone-publish.longhorn.io/longhorn/longhorn-ui)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-ui?ref=badge_shield)

## Source code

1. Longhorn engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/longhorn/longhorn-manager
3. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui
| Component | What it does | GitHub repo |
| :----------------------------- | :--------------------------------------------------------------------- | :------------------------------------------------------------------------------------------ |
| Longhorn Backing Image Manager | Backing image download, sync, and deletion in a disk | [longhorn/backing-image-manager](https://github.com/longhorn/backing-image-manager) |
| Longhorn Engine | Core controller/replica logic | [longhorn/longhorn-engine](https://github.com/longhorn/longhorn-engine) |
| Longhorn Instance Manager | Controller/replica instance lifecycle management | [longhorn/longhorn-instance-manager](https://github.com/longhorn/longhorn-instance-manager) |
| Longhorn Manager | Longhorn orchestration, includes CSI driver for Kubernetes | [longhorn/longhorn-manager](https://github.com/longhorn/longhorn-manager) |
| Longhorn Share Manager | NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes | [longhorn/longhorn-share-manager](https://github.com/longhorn/longhorn-share-manager) |
| Longhorn UI | The Longhorn dashboard | [longhorn/longhorn-ui](https://github.com/longhorn/longhorn-ui) |
# Get Started

## Requirements

For the installation requirements, refer to the [Longhorn documentation.](https://longhorn.io/docs/latest/deploy/install/#installation-requirements)

# Requirements

1. Docker v1.13+
2. Kubernetes v1.14+.
3. `open-iscsi` has been installed on all the nodes of the Kubernetes cluster.
    1. For GKE, Ubuntu is recommended as the guest OS image since it already contains open-iscsi.
    2. For Debian/Ubuntu, use `apt-get install open-iscsi` to install.
    3. For RHEL/CentOS, use `yum install iscsi-initiator-utils` to install.
4. A host filesystem that supports the `file extents` feature on the nodes to store the data. Currently we support:
    1. ext4
    2. XFS
## Installation

> **NOTE**:
> Please note that the master branch is for the upcoming feature release development.
> For an official release installation or upgrade, please use one of the methods below.

Longhorn can be installed on a Kubernetes cluster in several ways:

- [Rancher App Marketplace](https://longhorn.io/docs/latest/deploy/install/install-with-rancher/)
- [kubectl](https://longhorn.io/docs/latest/deploy/install/install-with-kubectl/)
- [Helm](https://longhorn.io/docs/latest/deploy/install/install-with-helm/)

## Documentation

The official Longhorn documentation is [here.](https://longhorn.io/docs)

# Install

## On Kubernetes clusters Managed by Rancher 2.1 or newer

The easiest way to install Longhorn is to deploy Longhorn from the Rancher Catalog.

1. On the Rancher UI, select the cluster and project where you want to install Longhorn. We recommend creating a new project, e.g. `Storage`, for Longhorn.
2. Navigate to the `Catalog Apps` screen. Select `Launch`, find Longhorn in the list. Select `View Details`, then click `Launch`. Longhorn will be installed in the `longhorn-system` namespace.

After Longhorn has been successfully installed, you can access the Longhorn UI by navigating to the `Catalog Apps` screen.

One benefit of installing Longhorn through the Rancher catalog is that Rancher provides authentication to the Longhorn UI.

If there is a new version of Longhorn available, you will see an `Upgrade Available` sign on the `Catalog Apps` screen. You can click the `Upgrade` button to upgrade Longhorn manager. See more about upgrade [here](#upgrade).
# Get Involved

## Discussion, Feedback

If you have any discussion topics or feedback, feel free to [file a discussion](https://github.com/longhorn/longhorn/discussions).

## Feature Requests, Bug Reporting

If you have any issues, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose).
We have a weekly community issue review meeting to review all reported issues or enhancement requests.

When creating a bug issue, please help upload the support bundle to the issue or send it to
[longhorn-support-bundle](mailto:longhorn-support-bundle@suse.com).

## Report Vulnerabilities

If you find any vulnerabilities, please report them to [longhorn-security](mailto:longhorn-security@suse.com).

## On any Kubernetes cluster

### Install Longhorn with kubectl

You can install Longhorn on any Kubernetes cluster using the following command:

```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
```

Google Kubernetes Engine (GKE) requires additional setup in order for Longhorn to function properly. If you are a GKE user, read [this page](docs/gke.md) before proceeding.

### Install Longhorn with Helm

First, you need to initialize Helm locally and [install Tiller into your Kubernetes cluster with RBAC](https://helm.sh/docs/using_helm/#role-based-access-control).

Then download the Longhorn repository:

```
git clone https://github.com/longhorn/longhorn.git
```

Now use the following command to install Longhorn:

* Helm2
```
helm install ./longhorn/chart --name longhorn --namespace longhorn-system
```
* Helm3
```
kubectl create namespace longhorn-system
helm install longhorn ./longhorn/chart/ --namespace longhorn-system
```

---

Longhorn will be installed in the namespace `longhorn-system`.

One of the two available drivers (CSI and Flexvolume) will be chosen automatically based on the version of Kubernetes you use. See [here](docs/driver.md) for details.
# Community

Longhorn is open source software, so contributions are greatly welcome.
Please read the [Code of Conduct](./CODE_OF_CONDUCT.md) and [Contributing Guideline](./CONTRIBUTING.md) before contributing.

Contributing code is not the only way of contributing. We value feedback very much, and many of the Longhorn features originated from users' feedback.
If you have any feedback, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose) and talk to the developers at the [CNCF](https://slack.cncf.io/) [#longhorn](https://cloud-native.slack.com/messages/longhorn) Slack channel.

For discussions, feedback, requests, issues, or security reports, please use the channels below.
We also have a [CNCF Slack channel: longhorn](https://cloud-native.slack.com/messages/longhorn) for discussion.

## Community Meeting and Office Hours

Hosted by the core maintainers of Longhorn: fourth Friday of every month at 09:00 (CET) or 16:00 (CST) at https://community.cncf.io/longhorn-community/.

## Longhorn Mailing List

Stay up to date on the latest news and events: https://lists.cncf.io/g/cncf-longhorn

You can read more about the community and its events here: https://github.com/longhorn/community

A successful CSI-based deployment looks like this:

```
# kubectl -n longhorn-system get pod
NAME                                        READY   STATUS    RESTARTS   AGE
csi-attacher-0                              1/1     Running   0          6h
csi-provisioner-0                           1/1     Running   0          6h
engine-image-ei-57b85e25-8v65d              1/1     Running   0          7d
engine-image-ei-57b85e25-gjjs6              1/1     Running   0          7d
engine-image-ei-57b85e25-t2787              1/1     Running   0          7d
longhorn-csi-plugin-4cpk2                   2/2     Running   0          6h
longhorn-csi-plugin-ll6mq                   2/2     Running   0          6h
longhorn-csi-plugin-smlsh                   2/2     Running   0          6h
longhorn-driver-deployer-7b5bdcccc8-fbncl   1/1     Running   0          6h
longhorn-manager-7x8x8                      1/1     Running   0          6h
longhorn-manager-8kqf4                      1/1     Running   0          6h
longhorn-manager-kln4h                      1/1     Running   0          6h
longhorn-ui-f849dcd85-cgkgg                 1/1     Running   0          5d
```

### Accessing the UI

> For Longhorn v0.8.0+, the UI service type has been changed from `LoadBalancer` to `ClusterIP`.

You can run `kubectl -n longhorn-system get svc` to get the Longhorn UI service:

```
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
longhorn-backend    ClusterIP   10.20.248.250   <none>        9500/TCP   58m
longhorn-frontend   ClusterIP   10.20.245.110   <none>        80/TCP     58m
```

To access the Longhorn UI when installed from a YAML manifest, you need to create an ingress controller.
See more about how to create an Nginx ingress controller with basic authentication [here](https://github.com/longhorn/longhorn/blob/master/docs/longhorn-ingress.md).

# License
Copyright (c) 2014-2022 The Longhorn Authors

# Upgrade

[See here](docs/upgrade.md) for details.

## Upgrade Longhorn manager

##### On Kubernetes clusters Managed by Rancher 2.1 or newer

Follow [the same steps for installation](#install) to upgrade Longhorn manager.

##### Using kubectl

```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
```

##### Using Helm

```
helm upgrade longhorn ./longhorn/chart
```

## Upgrade Longhorn engine

After Longhorn Manager has been upgraded, Longhorn Engine also needs to be upgraded using the Longhorn UI. [See here](docs/upgrade.md) for details.

# Create Longhorn Volumes

Before you create Kubernetes volumes, you must first create a storage class. Use the following command to create a StorageClass called `longhorn`.

```
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/examples/storageclass.yaml
```
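For reference, a Longhorn StorageClass looks roughly like the sketch below. The parameter values shown are illustrative, not authoritative; check `examples/storageclass.yaml` in the repository for the actual manifest and defaults.

```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
```

Volumes provisioned from this class inherit its parameters, so a different per-workload replica count can be set by creating additional StorageClasses with different `numberOfReplicas` values.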
Now you can create a pod using Longhorn like this:

```
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/examples/pvc.yaml
```

The above YAML file contains two parts:

1. Create a PVC using the Longhorn StorageClass:
    ```
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: longhorn-volv-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: longhorn
      resources:
        requests:
          storage: 2Gi
    ```

2. Use it in a Pod as a persistent volume:
    ```
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-test
      namespace: default
    spec:
      containers:
      - name: volume-test
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: longhorn-volv-pvc
    ```

More examples are available at `./examples/`.
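Beyond a standalone PVC, persistent volumes like these are commonly consumed through a StatefulSet's `volumeClaimTemplates`, which creates one claim per replica from the same StorageClass. A minimal sketch (the names `web` and `data` are hypothetical, not from the examples above):

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:stable-alpine
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: longhorn
      resources:
        requests:
          storage: 1Gi
```

Each replica (`web-0`, `web-1`) then gets its own independently provisioned volume that survives pod rescheduling.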
# Documentation

### [Snapshot and Backup](./docs/snapshot-backup.md)
### [Volume operations](./docs/volume.md)
### [Settings](./docs/settings.md)
### [Multiple disks](./docs/multidisk.md)
### [iSCSI](./docs/iscsi.md)
### [Kubernetes workload in Longhorn UI](./docs/k8s-workload.md)
### [Storage Tags](./docs/storage-tags.md)
### [Customized default setting](./docs/customized-default-setting.md)
### [Taint Toleration](./docs/taint-toleration.md)
### [Volume Expansion](./docs/expansion.md)
### [Restoring Stateful Set volumes](./docs/restore_statefulset.md)
### [Google Kubernetes Engine](./docs/gke.md)
### [Deal with Kubernetes node failure](./docs/node-failure.md)
### [Use CSI driver on RancherOS/CoreOS + RKE or K3S](./docs/csi-config.md)
### [Restore a backup to an image file](./docs/restore-to-file.md)
### [Disaster Recovery Volume](./docs/dr-volume.md)
### [Recover volume after unexpected detachment](./docs/recover-volume.md)

# Troubleshooting

You can click the `Generate Support Bundle` link at the bottom of the UI to download a zip file containing Longhorn-related configuration and logs.

See [here](./docs/troubleshooting.md) for the troubleshooting guide.

# Uninstall Longhorn

### Using kubectl

1. To prevent damaging the Kubernetes cluster, we recommend deleting all Kubernetes workloads using Longhorn volumes (PersistentVolume, PersistentVolumeClaim, StorageClass, Deployment, StatefulSet, DaemonSet, etc.) first.

2. Create the uninstallation job to clean up CRDs from the system and wait for success:
    ```
    kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/uninstall/uninstall.yaml
    kubectl get job/longhorn-uninstall -w
    ```

    Example output:
    ```
    $ kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/uninstall/uninstall.yaml
    serviceaccount/longhorn-uninstall-service-account created
    clusterrole.rbac.authorization.k8s.io/longhorn-uninstall-role created
    clusterrolebinding.rbac.authorization.k8s.io/longhorn-uninstall-bind created
    job.batch/longhorn-uninstall created

    $ kubectl get job/longhorn-uninstall -w
    NAME                 COMPLETIONS   DURATION   AGE
    longhorn-uninstall   0/1           3s         3s
    longhorn-uninstall   1/1           20s        20s
    ^C
    ```

3. Remove remaining components:
    ```
    kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
    kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/master/uninstall/uninstall.yaml
    ```

Tip: If you try `kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml` first and get stuck there,
pressing `Ctrl C` and then running `kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/uninstall/uninstall.yaml` can also help you remove Longhorn. Finally, don't forget to clean up the remaining components.

### Using Helm

```
helm delete longhorn --purge
```

## Community

Longhorn is open source software, so contributions are greatly welcome. Please read the [Code of Conduct](./CODE_OF_CONDUCT.md) and [Contributing Guideline](./CONTRIBUTING.md) before contributing.

Contributing code is not the only way of contributing. We value feedback very much, and many of the Longhorn features originated from users' feedback. If you have any feedback, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new?title=*Summarize%20your%20issue%20here*&body=*Describe%20your%20issue%20here*%0A%0A---%0AVersion%3A%20``) and talk to the developers at the [CNCF](https://slack.cncf.io/) [#longhorn](https://cloud-native.slack.com/messages/longhorn) Slack channel.

## License

Copyright (c) 2014-2020 The Longhorn Authors
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

## Longhorn is a [CNCF Incubating Project](https://www.cncf.io/projects/)

### Longhorn is a [CNCF Sandbox Project](https://www.cncf.io/sandbox-projects/)
@ -1,9 +1,9 @@
apiVersion: v1
name: longhorn
version: 1.6.0-dev
version: 0.8.0
appVersion: v1.6.0-dev
appVersion: v0.8.0
kubeVersion: ">=1.21.0-0"
kubeVersion: ">=v1.14.0-r0"
description: Longhorn is a distributed block storage system for Kubernetes.
description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs.
keywords:
- longhorn
- storage
@ -11,18 +11,14 @@ keywords:
- block
- device
- iscsi
- nfs
home: https://github.com/longhorn/longhorn
home: https://github.com/rancher/longhorn
sources:
- https://github.com/longhorn/longhorn
- https://github.com/rancher/longhorn
- https://github.com/longhorn/longhorn-engine
- https://github.com/rancher/longhorn-engine
- https://github.com/longhorn/longhorn-instance-manager
- https://github.com/rancher/longhorn-manager
- https://github.com/longhorn/longhorn-share-manager
- https://github.com/rancher/longhorn-ui
- https://github.com/longhorn/longhorn-manager
- https://github.com/rancher/longhorn-tests
- https://github.com/longhorn/longhorn-ui
- https://github.com/longhorn/longhorn-tests
- https://github.com/longhorn/backing-image-manager
maintainers:
- name: Longhorn maintainers
- name: rancher
email: maintainers@longhorn.io
email: charts@rancher.com
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/longhorn/icon/color/longhorn-icon-color.png
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/longhorn/horizontal/color/longhorn-horizontal-color.svg?sanitize=true
chart/README.md
@ -1,326 +1,46 @@
# Longhorn Chart

> **Important**: Please install the Longhorn chart in the `longhorn-system` namespace only.

> **Warning**: Longhorn doesn't support downgrading from a higher version to a lower version.

## Source Code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

1. Longhorn Engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn Instance Manager -- Controller/replica instance lifecycle management https://github.com/longhorn/longhorn-instance-manager
3. Longhorn Share Manager -- NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes. https://github.com/longhorn/longhorn-share-manager
4. Backing Image Manager -- Backing image file lifecycle management. https://github.com/longhorn/backing-image-manager
5. Longhorn Manager -- Longhorn orchestration, includes CSI driver for Kubernetes https://github.com/longhorn/longhorn-manager
6. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui

## Prerequisites

1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes >= v1.21
3. Make sure `bash`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.

# Rancher Longhorn Chart

Please install the Longhorn chart in the `longhorn-system` namespace only.

The following document pertains to running Longhorn from the Rancher 2.0 chart.

## Source Code

1. Longhorn Engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn Manager -- Longhorn orchestration, includes CSI driver for Kubernetes https://github.com/longhorn/longhorn-manager
3. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui

## Prerequisites

1. Rancher v2.1+
2. Docker v1.13+
3. Kubernetes v1.14+
4. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
5. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.
|
||||||
## Upgrading to Kubernetes v1.25+

Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.

As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `enablePSP` set to `false` if it has been previously set to `true`.

> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, you may have to clean up your Helm release secrets.

Upon setting `enablePSP` to `false`, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.

As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Longhorn docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
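The in-place upgrade described above reduces to a single chart value; a minimal sketch of the override (the `enablePSP` key is documented under Other Settings below):

```yaml
# Apply via `helm upgrade` with this override *before* upgrading the cluster to v1.25+.
# false is also the chart default; setting it removes any previously deployed PSP resources.
enablePSP: false
```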
## Installation

1. Add the Longhorn chart repository.
```
helm repo add longhorn https://charts.longhorn.io
```

2. Update local Longhorn chart information from the chart repository.
```
helm repo update
```

3. Install the Longhorn chart.

   - With Helm 2, the following command will create the `longhorn-system` namespace and install the Longhorn chart together.
   ```
   helm install longhorn/longhorn --name longhorn --namespace longhorn-system
   ```

   - With Helm 3, the following commands will create the `longhorn-system` namespace first, then install the Longhorn chart.
   ```
   kubectl create namespace longhorn-system
   helm install longhorn longhorn/longhorn --namespace longhorn-system
   ```
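Chart values can be customized at install time by passing a values file with `-f`. An illustrative minimal values file (keys are from the Values section below; the values shown are the chart defaults):

```yaml
# values.yaml -- pass with: helm install longhorn longhorn/longhorn -n longhorn-system -f values.yaml
persistence:
  defaultClass: true            # make the Longhorn StorageClass the cluster default
  defaultClassReplicaCount: 3   # replicas per volume created from the StorageClass
```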
 ## Uninstallation

-To uninstall Longhorn with Helm 2:
-```
-kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
-helm delete longhorn --purge
-```
-
-To uninstall Longhorn with Helm 3:
-```
-kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
-helm uninstall longhorn -n longhorn-system
-kubectl delete namespace longhorn-system
-```
+1. To prevent damage to the Kubernetes cluster, we recommend deleting all Kubernetes workloads using Longhorn volumes (PersistentVolume, PersistentVolumeClaim, StorageClass, Deployment, StatefulSet, DaemonSet, etc).
+2. From the Rancher UI, navigate to the `Catalog Apps` tab and delete the Longhorn app.
+
+## Troubleshooting
+
+### I deleted the Longhorn App from Rancher UI instead of following the uninstallation procedure
+
+Redeploy the (same version) Longhorn App. Follow the uninstallation procedure above.
+
+### Problems with CRDs
+
+If your CRD instances or the CRDs themselves can't be deleted for whatever reason, run the commands below to clean up. Caution: this will wipe all Longhorn state!
+
+```
+# Delete CRD instances and definitions
+curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v062
+curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v070
+```
## Values

The `values.yaml` contains items used to tweak a deployment of this chart.

### Cattle Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global.cattle.systemDefaultRegistry | string | `""` | System default registry |
| global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector | string | `"kubernetes.io/os:linux"` | Node selector for Longhorn system managed components |
| global.cattle.windowsCluster.defaultSetting.taintToleration | string | `"cattle.io/os=linux:NoSchedule"` | Toleration for Longhorn system managed components |
| global.cattle.windowsCluster.enabled | bool | `false` | Enable this to allow Longhorn to run on a Rancher-deployed Windows cluster |
| global.cattle.windowsCluster.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Select Linux nodes to run Longhorn user deployed components |
| global.cattle.windowsCluster.tolerations | list | `[{"effect":"NoSchedule","key":"cattle.io/os","operator":"Equal","value":"linux"}]` | Tolerations for the Linux nodes that run Longhorn user deployed components |

### Network Policies

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| networkPolicies.enabled | bool | `false` | Enable NetworkPolicies to limit access to the Longhorn pods |
| networkPolicies.type | string | `"k3s"` | Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1` |
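As a sketch, enabling the ingress-facing network policies on a K3s cluster would look like this in `values.yaml`:

```yaml
networkPolicies:
  enabled: true
  type: k3s   # pick the option matching your distribution: k3s, rke2, or rke1
```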
### Image Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| image.csi.attacher.repository | string | `"longhornio/csi-attacher"` | Specify CSI attacher image repository. Leave blank to autodetect |
| image.csi.attacher.tag | string | `"v4.2.0"` | Specify CSI attacher image tag. Leave blank to autodetect |
| image.csi.livenessProbe.repository | string | `"longhornio/livenessprobe"` | Specify CSI liveness probe image repository. Leave blank to autodetect |
| image.csi.livenessProbe.tag | string | `"v2.9.0"` | Specify CSI liveness probe image tag. Leave blank to autodetect |
| image.csi.nodeDriverRegistrar.repository | string | `"longhornio/csi-node-driver-registrar"` | Specify CSI node driver registrar image repository. Leave blank to autodetect |
| image.csi.nodeDriverRegistrar.tag | string | `"v2.7.0"` | Specify CSI node driver registrar image tag. Leave blank to autodetect |
| image.csi.provisioner.repository | string | `"longhornio/csi-provisioner"` | Specify CSI provisioner image repository. Leave blank to autodetect |
| image.csi.provisioner.tag | string | `"v3.4.1"` | Specify CSI provisioner image tag. Leave blank to autodetect |
| image.csi.resizer.repository | string | `"longhornio/csi-resizer"` | Specify CSI driver resizer image repository. Leave blank to autodetect |
| image.csi.resizer.tag | string | `"v1.7.0"` | Specify CSI driver resizer image tag. Leave blank to autodetect |
| image.csi.snapshotter.repository | string | `"longhornio/csi-snapshotter"` | Specify CSI driver snapshotter image repository. Leave blank to autodetect |
| image.csi.snapshotter.tag | string | `"v6.2.1"` | Specify CSI driver snapshotter image tag. Leave blank to autodetect |
| image.longhorn.backingImageManager.repository | string | `"longhornio/backing-image-manager"` | Specify Longhorn backing image manager image repository |
| image.longhorn.backingImageManager.tag | string | `"master-head"` | Specify Longhorn backing image manager image tag |
| image.longhorn.engine.repository | string | `"longhornio/longhorn-engine"` | Specify Longhorn engine image repository |
| image.longhorn.engine.tag | string | `"master-head"` | Specify Longhorn engine image tag |
| image.longhorn.instanceManager.repository | string | `"longhornio/longhorn-instance-manager"` | Specify Longhorn instance manager image repository |
| image.longhorn.instanceManager.tag | string | `"master-head"` | Specify Longhorn instance manager image tag |
| image.longhorn.manager.repository | string | `"longhornio/longhorn-manager"` | Specify Longhorn manager image repository |
| image.longhorn.manager.tag | string | `"master-head"` | Specify Longhorn manager image tag |
| image.longhorn.shareManager.repository | string | `"longhornio/longhorn-share-manager"` | Specify Longhorn share manager image repository |
| image.longhorn.shareManager.tag | string | `"master-head"` | Specify Longhorn share manager image tag |
| image.longhorn.supportBundleKit.repository | string | `"longhornio/support-bundle-kit"` | Specify Longhorn support bundle manager image repository |
| image.longhorn.supportBundleKit.tag | string | `"v0.0.27"` | Specify Longhorn support bundle manager image tag |
| image.longhorn.ui.repository | string | `"longhornio/longhorn-ui"` | Specify Longhorn UI image repository |
| image.longhorn.ui.tag | string | `"master-head"` | Specify Longhorn UI image tag |
| image.openshift.oauthProxy.repository | string | `"quay.io/openshift/origin-oauth-proxy"` | For OpenShift users. Specify the oauth proxy image repository |
| image.openshift.oauthProxy.tag | float | `4.13` | For OpenShift users. Specify the oauth proxy image tag. Note: use your OCP/OKD 4.X version; the current stable is 4.13 |
| image.pullPolicy | string | `"IfNotPresent"` | Image pull policy applied to all user deployed Longhorn components, e.g., Longhorn manager, Longhorn driver, Longhorn UI |
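For example, pinning the manager image while keeping the default pull policy might look like the following (`master-head` is the chart default shown above; a pinned release tag would normally be used in production):

```yaml
image:
  pullPolicy: IfNotPresent
  longhorn:
    manager:
      repository: longhornio/longhorn-manager
      tag: master-head   # replace with a pinned release tag in production
```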
### Service Settings

| Key | Description |
|-----|-------------|
| service.manager.nodePort | NodePort port number (to set explicitly, choose a port between 30000-32767) |
| service.manager.type | Define the Longhorn manager service type. |
| service.ui.nodePort | NodePort port number (to set explicitly, choose a port between 30000-32767) |
| service.ui.type | Define the Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy` |

### StorageClass Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| persistence.backingImage.dataSourceParameters | string | `nil` | Specify the data source parameters for the backing image used in the Longhorn StorageClass. This option accepts a JSON string of a map. e.g., `'{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'`. |
| persistence.backingImage.dataSourceType | string | `nil` | Specify the data source type for the backing image used in the Longhorn StorageClass. If the backing image does not exist, Longhorn will use this field to create it. Otherwise, Longhorn will use it to verify the selected backing image. |
| persistence.backingImage.enable | bool | `false` | Set a backing image for the Longhorn StorageClass |
| persistence.backingImage.expectedChecksum | string | `nil` | Specify the expected SHA512 checksum of the selected backing image in the Longhorn StorageClass |
| persistence.backingImage.name | string | `nil` | Specify a backing image that will be used by Longhorn volumes in the Longhorn StorageClass. If it does not exist, the backing image data source type and data source parameters should be specified so that Longhorn can create the backing image before using it |
| persistence.defaultClass | bool | `true` | Set the Longhorn StorageClass as default |
| persistence.defaultClassReplicaCount | int | `3` | Set the replica count for the Longhorn StorageClass |
| persistence.defaultDataLocality | string | `"disabled"` | Set data locality for the Longhorn StorageClass. Options: `disabled`, `best-effort` |
| persistence.defaultFsType | string | `"ext4"` | Set the filesystem type for the Longhorn StorageClass |
| persistence.defaultMkfsParams | string | `""` | Set mkfs options for the Longhorn StorageClass |
| persistence.defaultNodeSelector.enable | bool | `false` | Enable the node selector for the Longhorn StorageClass |
| persistence.defaultNodeSelector.selector | string | `""` | This selector enables only nodes having these tags to be used for the volume. e.g. `"storage,fast"` |
| persistence.migratable | bool | `false` | Set volumes as migratable for the Longhorn StorageClass |
| persistence.reclaimPolicy | string | `"Delete"` | Define the reclaim policy. Options: `Retain`, `Delete` |
| persistence.recurringJobSelector.enable | bool | `false` | Enable the recurring job selector for the Longhorn StorageClass |
| persistence.recurringJobSelector.jobList | list | `[]` | Recurring job selector list for the Longhorn StorageClass. Please be careful with quoting of the input. e.g., `[{"name":"backup", "isGroup":true}]` |
| persistence.removeSnapshotsDuringFilesystemTrim | string | `"ignored"` | Allow automatically removing snapshots during filesystem trim for the Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled` |
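A hypothetical StorageClass configuration using a downloaded backing image — the URL is the placeholder from the table above, and `download` is assumed to be a valid data source type (verify against the Longhorn docs):

```yaml
persistence:
  defaultFsType: ext4
  defaultDataLocality: best-effort
  backingImage:
    enable: true
    name: example-backing-image   # hypothetical image name
    dataSourceType: download      # assumption; check valid types in the Longhorn docs
    dataSourceParameters: '{"url":"https://backing-image-example.s3-region.amazonaws.com/test-backing-image"}'
```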
### CSI Settings

| Key | Description |
|-----|-------------|
| csi.attacherReplicaCount | Specify the replica count of the CSI Attacher. Leave blank to use the default count: 3 |
| csi.kubeletRootDir | Specify the kubelet root-dir. Leave blank to autodetect |
| csi.provisionerReplicaCount | Specify the replica count of the CSI Provisioner. Leave blank to use the default count: 3 |
| csi.resizerReplicaCount | Specify the replica count of the CSI Resizer. Leave blank to use the default count: 3 |
| csi.snapshotterReplicaCount | Specify the replica count of the CSI Snapshotter. Leave blank to use the default count: 3 |
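A sketch of CSI overrides; `/var/lib/kubelet` is the usual kubelet root on most distributions, but leaving `kubeletRootDir` blank lets the chart autodetect it:

```yaml
csi:
  kubeletRootDir: /var/lib/kubelet   # leave unset to autodetect
  attacherReplicaCount: 3            # matches the default count
```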
### Longhorn Manager Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn manager component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornManager.log.format | string | `"plain"` | Options: `plain`, `json` |
| longhornManager.nodeSelector | object | `{}` | Select nodes to run the Longhorn manager |
| longhornManager.priorityClass | string | `nil` | Priority class for the Longhorn manager |
| longhornManager.serviceAnnotations | object | `{}` | Annotations used in the Longhorn manager service |
| longhornManager.tolerations | list | `[]` | Tolerations for nodes that run the Longhorn manager |
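For instance, to run the manager with JSON logs and tolerate control-plane taints (the toleration key shown is the standard Kubernetes one, not something this chart defines):

```yaml
longhornManager:
  log:
    format: json
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```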
### Longhorn Driver Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn driver component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornDriver.nodeSelector | object | `{}` | Select nodes to run the Longhorn driver |
| longhornDriver.priorityClass | string | `nil` | Priority class for the Longhorn driver |
| longhornDriver.tolerations | list | `[]` | Tolerations for nodes that run the Longhorn driver |

### Longhorn UI Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn UI component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornUI.nodeSelector | object | `{}` | Select nodes to run the Longhorn UI |
| longhornUI.priorityClass | string | `nil` | Priority class for the Longhorn UI |
| longhornUI.replicas | int | `2` | Replica count for the Longhorn UI |
| longhornUI.tolerations | list | `[]` | Tolerations for nodes that run the Longhorn UI |
### Ingress Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| ingress.annotations | string | `nil` | Ingress annotations, as key:value pairs |
| ingress.enabled | bool | `false` | Set to true to enable ingress record generation |
| ingress.host | string | `"sslip.io"` | Layer 7 load balancer hostname |
| ingress.ingressClassName | string | `nil` | Add an ingressClassName to the Ingress. Can replace the kubernetes.io/ingress.class annotation on v1.18+ |
| ingress.path | string | `"/"` | If ingress is enabled, you can set the default ingress path; you can then access the UI using the full path {{host}}+{{path}} |
| ingress.secrets | string | `nil` | If you're providing your own certificates, please use this to add the certificates as secrets |
| ingress.secureBackends | bool | `false` | Enable this so that the backend service is connected on port 443 |
| ingress.tls | bool | `false` | Set this to true to enable TLS on the ingress record |
| ingress.tlsSecret | string | `"longhorn.local-tls"` | If TLS is set to true, you must declare which secret will store the key/certificate for TLS |
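A TLS-enabled ingress sketch (the hostname is a placeholder; the secret name is the chart's default `tlsSecret` value from the table above):

```yaml
ingress:
  enabled: true
  host: longhorn.example.com   # placeholder hostname
  tls: true
  tlsSecret: longhorn.local-tls
```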
### Private Registry Settings

Longhorn can be installed in an air-gapped environment with private registry settings. Please refer to **Air Gap Installation** on our official site: [link](https://longhorn.io/docs)

| Key | Description |
|-----|-------------|
| privateRegistry.createSecret | Set `true` to create a new private registry secret |
| privateRegistry.registryPasswd | Password used to authenticate to the private registry |
| privateRegistry.registrySecret | If `createSecret` is true, a Kubernetes secret with this name is created; otherwise the existing secret of this name is used. It is used to pull images from your private registry |
| privateRegistry.registryUrl | URL of the private registry. Leave blank to apply the system default registry |
| privateRegistry.registryUser | User used to authenticate to the private registry |
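An air-gapped setup could create the pull secret in one shot; every value below is a placeholder for your own registry:

```yaml
privateRegistry:
  createSecret: true
  registryUrl: registry.example.com       # placeholder
  registryUser: admin                     # placeholder
  registryPasswd: changeme                # placeholder
  registrySecret: longhorn-registry-secret
```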
### OS/Kubernetes Distro Settings

#### OpenShift Settings

Please also refer to this document [ocp-readme](https://github.com/longhorn/longhorn/blob/master/chart/ocp-readme.md) for more details

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| openshift.enabled | bool | `false` | Enable when using OpenShift |
| openshift.ui.port | int | `443` | UI port in the OpenShift environment |
| openshift.ui.proxy | int | `8443` | UI proxy port in the OpenShift environment |
| openshift.ui.route | string | `"longhorn-ui"` | UI route in the OpenShift environment |
### Other Settings

| Key | Default | Description |
|-----|---------|-------------|
| annotations | `{}` | Annotations to add to the Longhorn Manager DaemonSet pods. Optional. |
| enablePSP | `false` | For Kubernetes < v1.25, if your cluster enables the Pod Security Policy admission controller, set this to `true` to ship longhorn-psp, which allows privileged Longhorn pods to start |
### System Default Settings

For system default settings, you can leave them blank at first to use the default values, which will be applied when installing Longhorn.
You can then change them through the UI after installation.
For more details, such as types or options, refer to **Settings Reference** on our official site: [link](https://longhorn.io/docs)

| Key | Description |
|-----|-------------|
| defaultSettings.allowEmptyDiskSelectorVolume | Allow Scheduling Empty Disk Selector Volumes To Any Disk |
| defaultSettings.allowEmptyNodeSelectorVolume | Allow Scheduling Empty Node Selector Volumes To Any Node |
| defaultSettings.allowRecurringJobWhileVolumeDetached | If this setting is enabled, Longhorn will automatically attach the volume and take a snapshot/backup when it is time for a recurring snapshot/backup. |
| defaultSettings.allowVolumeCreationWithDegradedAvailability | This setting allows users to create and attach a volume that doesn't have all of its replicas scheduled at the time of creation. |
| defaultSettings.autoCleanupSystemGeneratedSnapshot | This setting enables Longhorn to automatically clean up the system generated snapshot after a replica rebuild is done. |
| defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly | If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc.) when the Longhorn volume is detached unexpectedly (e.g. during a Kubernetes upgrade, Docker reboot, or network disconnect). By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount. |
| defaultSettings.autoSalvage | If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to network disconnection. Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true. |
| defaultSettings.backingImageCleanupWaitInterval | This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when no replica in the disk is using it. |
| defaultSettings.backingImageRecoveryWaitInterval | This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown. |
| defaultSettings.backupCompressionMethod | This setting allows users to specify the backup compression method. |
| defaultSettings.backupConcurrentLimit | This setting controls how many worker threads run concurrently per backup. |
| defaultSettings.backupTarget | The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE. |
| defaultSettings.backupTargetCredentialSecret | The name of the Kubernetes secret associated with the backup target. |
| defaultSettings.backupstorePollInterval | In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups. Set to 0 to disable polling. By default 300. |
| defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit | This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading the Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version. |
| defaultSettings.concurrentReplicaRebuildPerNodeLimit | This setting controls how many replicas on a node can be rebuilt simultaneously. |
| defaultSettings.concurrentVolumeBackupRestorePerNodeLimit | This setting controls how many volumes on a node can restore backups concurrently. Set the value to **0** to disable backup restore. |
| defaultSettings.createDefaultDiskLabeledNodes | Create the default disk automatically only on nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist. If disabled, the default disk will be created on all new nodes when each node is first added. |
| defaultSettings.defaultDataLocality | A Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume. |
| defaultSettings.defaultDataPath | Default path to use for storing data on a host. By default "/var/lib/longhorn/" |
| defaultSettings.defaultLonghornStaticStorageClass | The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. By default 'longhorn-static'. |
| defaultSettings.defaultReplicaCount | The default number of replicas when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3. |
| defaultSettings.deletingConfirmationFlag | This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss. |
| defaultSettings.disableRevisionCounter | This setting is only for volumes created by the UI. By default this is false, meaning there will be a revision counter file to track every write to the volume. During salvage recovery, Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume. If the revision counter is disabled, Longhorn will not track every write to the volume. During salvage recovery, Longhorn will use the 'volume-head-xxx.img' file's last modification time and file size to pick the replica candidate to recover the whole volume. |
| defaultSettings.disableSchedulingOnCordonedNode | Prevent the Longhorn manager from scheduling replicas on cordoned Kubernetes nodes. By default true. |
| defaultSettings.engineReplicaTimeout | In seconds. The setting specifies the timeout between the engine and replica(s); the value should be between 8 and 30 seconds. The default value is 8 seconds. |
| defaultSettings.failedBackupTTL | In minutes. This setting determines how long Longhorn will keep a failed backup resource. Set to 0 to disable auto-deletion. |
| defaultSettings.fastReplicaRebuildEnabled | This feature supports fast replica rebuilding. It relies on the checksums of snapshot disk files, so setting snapshot-data-integrity to **enable** or **fast-check** is a prerequisite. |
| defaultSettings.guaranteedInstanceManagerCPU | This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager Pod. You can leave it at the default value, which is 12%. |
| defaultSettings.kubernetesClusterAutoscalerEnabled | Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler. |
| defaultSettings.logLevel | The log level (Panic, Fatal, Error, Warn, Info, Debug, Trace) used in the Longhorn manager. Defaults to Info. |
| defaultSettings.nodeDownPodDeletionPolicy | Defines the Longhorn action when a volume is stuck with a StatefulSet/Deployment pod on a node that is down. |
| defaultSettings.nodeDrainPolicy | Defines the policy to use when a node with the last healthy replica of a volume is drained. |
| defaultSettings.offlineReplicaRebuilding | This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine. |
| defaultSettings.orphanAutoDeletion | This setting allows Longhorn to automatically delete orphan resources and their corresponding orphaned data, such as stale replicas. Orphan resources on down or unknown nodes will not be cleaned up automatically. |
| defaultSettings.priorityClass | PriorityClass for Longhorn system components |
| defaultSettings.recurringFailedJobsHistoryLimit | This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0. |
| defaultSettings.recurringSuccessfulJobsHistoryLimit | This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0. |
| defaultSettings.removeSnapshotsDuringFilesystemTrim | This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and its ancestors as removed, stopping at the snapshot containing multiple children. |
| defaultSettings.replicaAutoBalance | Enabling this setting automatically rebalances replicas when an available node is discovered. |
| defaultSettings.replicaDiskSoftAntiAffinity | Allow scheduling on disks with existing healthy replicas of the same volume. By default true. |
| defaultSettings.replicaFileSyncHttpClientTimeout | In seconds. The setting specifies the HTTP client timeout to the file sync server. |
| defaultSettings.replicaReplenishmentWaitInterval | In seconds. The interval determines how long Longhorn will wait, at a minimum, in order to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume. |
| defaultSettings.replicaSoftAntiAffinity | Allow scheduling on nodes with existing healthy replicas of the same volume. By default false. |
| defaultSettings.replicaZoneSoftAntiAffinity | Allow scheduling new replicas of a volume to nodes in the same zone as existing healthy replicas. Nodes that don't belong to any zone will be treated as being in the same zone. Note that Longhorn relies on the label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone. By default true. |
| defaultSettings.restoreConcurrentLimit | This setting controls how many worker threads run concurrently per restore. |
| defaultSettings.restoreVolumeRecurringJobs | Restore recurring jobs from the backup volume on the backup target, and create them if they do not exist, during a backup restoration. |
| defaultSettings.snapshotDataIntegrity | This setting allows users to enable or disable snapshot hashing and data integrity checking. |
| defaultSettings.snapshotDataIntegrityCronjob | Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files. |
| defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation | Hashing snapshot disk files impacts system performance. Immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot. |
| defaultSettings.storageMinimalAvailablePercentage | If the minimum available disk capacity exceeds the actual percentage of available disk capacity, the disk becomes unschedulable until more space is freed up. By default 25. |
| defaultSettings.storageNetwork | Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network. |
| defaultSettings.storageOverProvisioningPercentage | The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200. |
| defaultSettings.storageReservedPercentageForDefaultDisk | The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node. |
|
|
||||||
| defaultSettings.supportBundleFailedHistoryLimit | This setting specifies how many failed support bundles can exist in the cluster. Set this value to **0** to have Longhorn automatically purge all failed support bundles. |
|
|
||||||
| defaultSettings.systemManagedComponentsNodeSelector | nodeSelector for longhorn system components |
|
|
||||||
| defaultSettings.systemManagedPodsImagePullPolicy | This setting defines the Image Pull Policy of Longhorn system managed pod. e.g. instance manager, engine image, CSI driver, etc. The new Image Pull Policy will only apply after the system managed pods restart. |
|
|
||||||
| defaultSettings.taintToleration | taintToleration for longhorn system components |
|
|
||||||
| defaultSettings.upgradeChecker | Upgrade Checker will check for new Longhorn version periodically. When there is a new version available, a notification will appear in the UI. By default true. |
|
|
||||||
| defaultSettings.v2DataEngine | This allows users to activate v2 data engine based on SPDK. Currently, it is in the preview phase and should not be utilized in a production environment. |
|
|
||||||
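To make the interaction between `storageOverProvisioningPercentage` and `storageMinimalAvailablePercentage` concrete, here is a small worked example with the chart defaults (the disk size is illustrative):

```shell
# Illustrative arithmetic for a node with a single 1000 GiB disk,
# using the chart defaults of 200% over-provisioning and 25% minimal available.
disk_gib=1000
echo "schedulable up to: $(( disk_gib * 200 / 100 )) GiB of volume capacity"
echo "disk unschedulable once free space drops below: $(( disk_gib * 25 / 100 )) GiB"
```

So with defaults, Longhorn may schedule up to 2000 GiB of nominal volume capacity on the disk, but stops scheduling once less than 250 GiB remains free.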

---

Please see [link](https://github.com/longhorn/longhorn) for more information.
@ -1,253 +0,0 @@
# Longhorn Chart

> **Important**: Please install the Longhorn chart in the `longhorn-system` namespace only.

> **Warning**: Longhorn doesn't support downgrading from a higher version to a lower version.

## Source Code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

1. Longhorn Engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn Instance Manager -- Controller/replica instance lifecycle management https://github.com/longhorn/longhorn-instance-manager
3. Longhorn Share Manager -- NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes https://github.com/longhorn/longhorn-share-manager
4. Backing Image Manager -- Backing image file lifecycle management https://github.com/longhorn/backing-image-manager
5. Longhorn Manager -- Longhorn orchestration, including the CSI driver for Kubernetes https://github.com/longhorn/longhorn-manager
6. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui

## Prerequisites

1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes >= v1.21
3. Make sure `bash`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running, on all nodes of the Kubernetes cluster. For GKE, Ubuntu is the recommended guest OS image since it already contains `open-iscsi`.
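As a quick sanity check of items 3 and 4 above, a small script along these lines (an illustrative sketch, not part of the chart) can be run on each node:

```shell
#!/usr/bin/env bash
# Illustrative pre-install check for the utilities Longhorn expects on every node.
for cmd in bash curl findmnt grep awk blkid iscsiadm; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd"
  else
    echo "MISSING: $cmd"
  fi
done

# iscsid must be running for iSCSI volume attachment to work
# (systemctl may be absent on non-systemd hosts; treat that as "unknown").
if command -v systemctl >/dev/null 2>&1 && systemctl is-active --quiet iscsid; then
  echo "iscsid: active"
else
  echo "iscsid: not active or unknown"
fi
```

`iscsiadm` is the client binary shipped by the `open-iscsi` package, so its presence is a reasonable proxy for item 4.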

## Upgrading to Kubernetes v1.25+

Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.

As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `enablePSP` set to `false` if it has previously been set to `true`.

> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, you may have to clean up your Helm release secrets.

Upon setting `enablePSP` to `false`, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.

As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Longhorn docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
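The in-place upgrade described above is a single `helm upgrade`. Assuming the conventional release name `longhorn` in the `longhorn-system` namespace (adjust to your install), it could look like:

```
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --reuse-values \
  --set enablePSP=false
```

`--reuse-values` keeps any other overrides from the previous release while flipping only `enablePSP`.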

## Installation

1. Add Longhorn chart repository.
```
helm repo add longhorn https://charts.longhorn.io
```

2. Update local Longhorn chart information from chart repository.
```
helm repo update
```

3. Install Longhorn chart.

- With Helm 2, the following command will create the `longhorn-system` namespace and install the Longhorn chart together.
```
helm install longhorn/longhorn --name longhorn --namespace longhorn-system
```

- With Helm 3, the following commands will create the `longhorn-system` namespace first, then install the Longhorn chart.

```
kubectl create namespace longhorn-system
helm install longhorn longhorn/longhorn --namespace longhorn-system
```
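Chart defaults can also be overridden at install time with a values file. A minimal sketch (the keys shown are documented in the tables below; the values are illustrative, not recommendations):

```
# values-override.yaml
persistence:
  defaultClassReplicaCount: 2
defaultSettings:
  storageMinimalAvailablePercentage: 15
```

Pass it with `helm install longhorn longhorn/longhorn --namespace longhorn-system --values values-override.yaml`.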

## Uninstallation

With Helm 2, to uninstall Longhorn:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm delete longhorn --purge
```

With Helm 3, to uninstall Longhorn:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm uninstall longhorn -n longhorn-system
kubectl delete namespace longhorn-system
```
## Values

The `values.yaml` file contains items used to tweak a deployment of this chart.
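To inspect every available key with its default before editing, the full values file can be dumped locally (assuming the repo was added as `longhorn` above):

```
helm show values longhorn/longhorn > values.yaml
```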

### Cattle Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "global" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Network Policies

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "networkPolicies" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Image Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "image" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Service Settings

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if (and (hasPrefix "service" .Key) (not (contains "Account" .Key))) }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### StorageClass Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "persistence" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### CSI Settings

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "csi" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn Manager Settings

The Longhorn system contains user-deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system-managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn manager component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornManager" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn Driver Settings

The Longhorn system contains user-deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system-managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn driver component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornDriver" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn UI Settings

The Longhorn system contains user-deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system-managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn UI component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornUI" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Ingress Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "ingress" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Private Registry Settings

Longhorn can be installed in an air-gapped environment with private registry settings. Please refer to **Air Gap Installation** on our official site: [link](https://longhorn.io/docs)

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "privateRegistry" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### OS/Kubernetes Distro Settings

#### OpenShift Settings

Please also refer to this document [ocp-readme](https://github.com/longhorn/longhorn/blob/master/chart/ocp-readme.md) for more details.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "openshift" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Other Settings

| Key | Default | Description |
|-----|---------|-------------|
{{- range .Values }}
{{- if not (or (hasPrefix "defaultSettings" .Key)
    (hasPrefix "networkPolicies" .Key)
    (hasPrefix "image" .Key)
    (hasPrefix "service" .Key)
    (hasPrefix "persistence" .Key)
    (hasPrefix "csi" .Key)
    (hasPrefix "longhornManager" .Key)
    (hasPrefix "longhornDriver" .Key)
    (hasPrefix "longhornUI" .Key)
    (hasPrefix "privateRegistry" .Key)
    (hasPrefix "ingress" .Key)
    (hasPrefix "openshift" .Key)
    (hasPrefix "global" .Key)) }}
| {{ .Key }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### System Default Settings

For system default settings, you can first leave them blank to use the default values, which will be applied when installing Longhorn.
You can then change them through the UI after installation.
For more details such as types or options, refer to **Settings Reference** on our official site: [link](https://longhorn.io/docs)

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "defaultSettings" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

---

Please see [link](https://github.com/longhorn/longhorn) for more information.
@ -4,8 +4,4 @@ Longhorn is a lightweight, reliable and easy to use distributed block storage sy

Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Longhorn supports snapshots, backups, and even allows you to schedule recurring snapshots and backups!

**Important**: Please install the Longhorn chart in the `longhorn-system` namespace only.

**Warning**: Longhorn doesn't support downgrading from a higher version to a lower version.

[Chart Documentation](https://github.com/longhorn/longhorn/blob/master/chart/README.md)
@ -1,177 +0,0 @@

# OpenShift / OKD Extra Configuration Steps

- [OpenShift / OKD Extra Configuration Steps](#openshift--okd-extra-configuration-steps)
  - [Notes](#notes)
  - [Known Issues](#known-issues)
  - [Preparing Nodes (Optional)](#preparing-nodes-optional)
    - [Default /var/lib/longhorn setup](#default-varliblonghorn-setup)
    - [Separate /var/mnt/longhorn setup](#separate-varmntlonghorn-setup)
      - [Create Filesystem](#create-filesystem)
      - [Mounting Disk On Boot](#mounting-disk-on-boot)
      - [Label and Annotate Nodes](#label-and-annotate-nodes)
  - [Example values.yaml](#example-valuesyaml)
  - [Installation](#installation)
  - [Refs](#refs)

## Notes

Main changes and tasks for OCP are:

- On OCP / OKD, the operating system is managed by the cluster.
- OCP imposes [Security Context Constraints](https://docs.openshift.com/container-platform/4.11/authentication/managing-security-context-constraints.html).
  - This requires everything to run with the least privilege possible. For the moment, every component has been given access to run with higher privilege.
  - Something to circle back on is network policies and which components can have their privileges reduced without impacting functionality.
    - The UI probably can be, for example.
- openshift/oauth-proxy for authentication to the Longhorn UI
  - **⚠️** Currently scoped to authenticated users that can delete a Longhorn settings object.
  - **⚠️** Since the UI itself is not protected, network policies will need to be created to prevent namespace <--> namespace communication against the pod or service object directly.
  - Anyone with access to the UI Deployment can remove the route restriction. (Namespace-scoped admin)
- Option to use a separate disk in /var/mnt/longhorn and a MachineConfig file to mount /var/mnt/longhorn
- Adding finalizers for mount propagation
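As a sketch of the network-policy idea mentioned above, a policy like the following would only admit UI traffic coming through the cluster ingress (and thus through the oauth-proxy-protected route). This is illustrative only, not shipped by the chart; the `app: longhorn-ui` pod label and the OpenShift `policy-group: ingress` namespace label are assumptions to adapt to your cluster:

```yaml
# Illustrative NetworkPolicy: block direct namespace <--> namespace access to the
# UI pods/service; only the cluster ingress namespaces may reach them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-ui-ingress-only
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-ui   # assumed UI pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress   # assumed ingress namespace label
```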

## Known Issues

- General Feature/Issue Thread
  - [[FEATURE] Deploying Longhorn on OKD/Openshift](https://github.com/longhorn/longhorn/issues/1831)
- 4.10 / 1.23:
  - 4.10.0-0.okd-2022-03-07-131213 to 4.10.0-0.okd-2022-07-09-073606
    - Tested, No Known Issues
- 4.11 / 1.24:
  - 4.11.0-0.okd-2022-07-27-052000 to 4.11.0-0.okd-2022-11-19-050030
    - Tested, No Known Issues
  - 4.11.0-0.okd-2022-12-02-145640, 4.11.0-0.okd-2023-01-14-152430:
    - Workaround: [[BUG] Volumes Stuck in Attach/Detach Loop](https://github.com/longhorn/longhorn/issues/4988)
    - [MachineConfig Patch](https://github.com/longhorn/longhorn/issues/4988#issuecomment-1345676772)
- 4.12 / 1.25:
  - 4.12.0-0.okd-2022-12-05-210624 to 4.12.0-0.okd-2023-01-20-101927
    - Tested, No Known Issues
  - 4.12.0-0.okd-2023-01-21-055900 to 4.12.0-0.okd-2023-02-18-033438:
    - Workaround: [[BUG] Volumes Stuck in Attach/Detach Loop](https://github.com/longhorn/longhorn/issues/4988)
    - [MachineConfig Patch](https://github.com/longhorn/longhorn/issues/4988#issuecomment-1345676772)
  - 4.12.0-0.okd-2023-03-05-022504 to 4.12.0-0.okd-2023-04-16-041331:
    - Tested, No Known Issues
- 4.13 / 1.26:
  - 4.13.0-0.okd-2023-05-03-001308 to 4.13.0-0.okd-2023-08-18-135805:
    - Tested, No Known Issues
- 4.14 / 1.27:
  - 4.14.0-0.okd-2023-08-12-022330 to 4.14.0-0.okd-2023-10-28-073550:
    - Tested, No Known Issues
## Preparing Nodes (Optional)

Only required if you need additional customizations, such as storage-less nodes or secondary disks.

### Default /var/lib/longhorn setup

Label each node for storage with:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc label node "${NODE}" node.longhorn.io/create-default-disk=true
```
### Separate /var/mnt/longhorn setup

#### Create Filesystem

On the storage nodes, create a filesystem with the label `longhorn`:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc debug node/${NODE} -t -- chroot /host bash

# Validate target drive is present
lsblk

export DRIVE="sdb" # vdb
sudo mkfs.ext4 -L longhorn /dev/${DRIVE}
```

> ⚠️ Note: If you add new nodes after the below MachineConfig is applied, you will also need to reboot those nodes.
#### Mounting Disk On Boot

The secondary drive needs to be mounted on every boot. Save the contents below and apply the MachineConfig with `oc apply -f`:

> ⚠️ This will trigger a machine config profile update and reboot all worker nodes on the cluster.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 71-mount-storage-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: var-mnt-longhorn.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target
            [Mount]
            Where=/var/mnt/longhorn
            What=/dev/disk/by-label/longhorn
            Options=rw,relatime,discard
            [Install]
            WantedBy=local-fs.target
```
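After the MachineConfig rolls out, the mount can be verified per node with commands along these lines (illustrative; `NODE` as exported above):

```bash
oc wait machineconfigpool/worker --for=condition=Updated --timeout=30m
oc debug node/${NODE} -t -- chroot /host findmnt /var/mnt/longhorn
```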

#### Label and Annotate Nodes

Label and annotate storage nodes like this:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc annotate node ${NODE} --overwrite node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
oc label node ${NODE} node.longhorn.io/create-default-disk=config
```
## Example values.yaml

Minimum adjustments required (the two `openshift:` blocks from earlier revisions are merged here, since duplicate top-level keys are invalid YAML):

```yaml
openshift:
  enabled: true
  ui:
    route: "longhorn-ui"
    port: 443
    proxy: 8443
  oauthProxy:
    repository: quay.io/openshift/origin-oauth-proxy
    tag: 4.14 # Use Your OCP/OKD 4.X Version, Current Stable is 4.14

# defaultSettings: # Preparing nodes (Optional)
#   createDefaultDiskLabeledNodes: true
```
## Installation

```bash
# helm template ./chart/ --namespace longhorn-system --values ./chart/values.yaml --no-hooks > longhorn.yaml # Local Testing
helm template longhorn --namespace longhorn-system --values values.yaml --no-hooks > longhorn.yaml
oc create namespace longhorn-system -o yaml --dry-run=client | oc apply -f -
oc apply -f longhorn.yaml -n longhorn-system
```
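Once applied, a simple way to watch the components come up (illustrative):

```bash
oc -n longhorn-system get pods -w
```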

## Refs

- <https://docs.openshift.com/container-platform/4.11/storage/persistent_storage/persistent-storage-iscsi.html>
- <https://docs.okd.io/4.11/storage/persistent_storage/persistent-storage-iscsi.html>
- okd 4.5: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-702690613>
- okd 4.6: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-765884631>
- oauth-proxy: <https://github.com/openshift/oauth-proxy/blob/master/contrib/sidecar.yaml>
- <https://github.com/longhorn/longhorn/issues/1831>
@ -1,825 +0,0 @@

categories:
- storage
namespace: longhorn-system
questions:
- variable: image.defaultImage
  default: "true"
  description: "Use default Longhorn images"
  label: Use Default Images
  type: boolean
  show_subquestion_if: false
  group: "Longhorn Images"
  subquestions:
  - variable: image.longhorn.manager.repository
    default: longhornio/longhorn-manager
    description: "Specify Longhorn Manager Image Repository"
    type: string
    label: Longhorn Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.manager.tag
    default: master-head
    description: "Specify Longhorn Manager Image Tag"
    type: string
    label: Longhorn Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.engine.repository
    default: longhornio/longhorn-engine
    description: "Specify Longhorn Engine Image Repository"
    type: string
    label: Longhorn Engine Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.engine.tag
    default: master-head
    description: "Specify Longhorn Engine Image Tag"
    type: string
    label: Longhorn Engine Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.ui.repository
    default: longhornio/longhorn-ui
    description: "Specify Longhorn UI Image Repository"
    type: string
    label: Longhorn UI Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.ui.tag
    default: master-head
    description: "Specify Longhorn UI Image Tag"
    type: string
    label: Longhorn UI Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.instanceManager.repository
    default: longhornio/longhorn-instance-manager
    description: "Specify Longhorn Instance Manager Image Repository"
    type: string
    label: Longhorn Instance Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.instanceManager.tag
    default: v2_20221123
    description: "Specify Longhorn Instance Manager Image Tag"
    type: string
    label: Longhorn Instance Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.shareManager.repository
    default: longhornio/longhorn-share-manager
    description: "Specify Longhorn Share Manager Image Repository"
    type: string
    label: Longhorn Share Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.shareManager.tag
    default: v1_20220914
    description: "Specify Longhorn Share Manager Image Tag"
    type: string
    label: Longhorn Share Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.backingImageManager.repository
    default: longhornio/backing-image-manager
    description: "Specify Longhorn Backing Image Manager Image Repository"
    type: string
    label: Longhorn Backing Image Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.backingImageManager.tag
    default: v3_20220808
    description: "Specify Longhorn Backing Image Manager Image Tag"
    type: string
    label: Longhorn Backing Image Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.supportBundleKit.repository
    default: longhornio/support-bundle-kit
    description: "Specify Longhorn Support Bundle Manager Image Repository"
    type: string
    label: Longhorn Support Bundle Kit Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.supportBundleKit.tag
    default: v0.0.27
    description: "Specify Longhorn Support Bundle Manager Image Tag"
    type: string
    label: Longhorn Support Bundle Kit Image Tag
    group: "Longhorn Images Settings"
  - variable: image.csi.attacher.repository
    default: longhornio/csi-attacher
    description: "Specify CSI attacher image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Attacher Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.attacher.tag
    default: v4.2.0
    description: "Specify CSI attacher image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Attacher Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.provisioner.repository
    default: longhornio/csi-provisioner
    description: "Specify CSI provisioner image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Provisioner Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.provisioner.tag
    default: v3.4.1
    description: "Specify CSI provisioner image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Provisioner Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.nodeDriverRegistrar.repository
    default: longhornio/csi-node-driver-registrar
    description: "Specify CSI Node Driver Registrar image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Node Driver Registrar Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.nodeDriverRegistrar.tag
    default: v2.7.0
    description: "Specify CSI Node Driver Registrar image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Node Driver Registrar Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.resizer.repository
    default: longhornio/csi-resizer
    description: "Specify CSI Driver Resizer image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Resizer Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.resizer.tag
    default: v1.7.0
    description: "Specify CSI Driver Resizer image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Resizer Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.snapshotter.repository
    default: longhornio/csi-snapshotter
    description: "Specify CSI Driver Snapshotter image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Snapshotter Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.snapshotter.tag
    default: v6.2.1
    description: "Specify CSI Driver Snapshotter image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Snapshotter Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.livenessProbe.repository
    default: longhornio/livenessprobe
    description: "Specify CSI liveness probe image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Liveness Probe Image Repository
|
|
||||||
group: "Longhorn CSI Driver Images"
|
|
||||||
- variable: image.csi.livenessProbe.tag
|
|
||||||
default: v2.9.0
|
|
||||||
description: "Specify CSI liveness probe image tag. Leave blank to autodetect."
|
|
||||||
type: string
|
|
||||||
label: Longhorn CSI Liveness Probe Image Tag
|
|
||||||
group: "Longhorn CSI Driver Images"
|
|
||||||
- variable: privateRegistry.registryUrl
  label: Private registry URL
  description: "URL of the private registry. Leave blank to apply the system default registry."
  group: "Private Registry Settings"
  type: string
  default: ""
- variable: privateRegistry.registrySecret
  label: Private registry secret name
  description: "If 'Create a new private registry secret' is true, create a Kubernetes secret with this name; otherwise, use the existing secret of this name. Use it to pull images from your private registry."
  group: "Private Registry Settings"
  type: string
  default: ""
- variable: privateRegistry.createSecret
  default: "true"
  description: "Create a new private registry secret"
  type: boolean
  group: "Private Registry Settings"
  label: Create Secret for Private Registry Settings
  show_subquestion_if: true
  subquestions:
  - variable: privateRegistry.registryUser
    label: Private registry user
    description: "User used to authenticate to the private registry."
    type: string
    default: ""
  - variable: privateRegistry.registryPasswd
    label: Private registry password
    description: "Password used to authenticate to the private registry."
    type: password
    default: ""
- variable: longhorn.default_setting
  default: "false"
  description: "Customize the default settings before installing Longhorn for the first time. This option only works on a cluster where Longhorn has not been installed yet."
  label: "Customize Default Settings"
  type: boolean
  show_subquestion_if: true
  group: "Longhorn Default Settings"
  subquestions:
  - variable: csi.kubeletRootDir
    default:
    description: "Specify the kubelet root-dir. Leave blank to autodetect."
    type: string
    label: Kubelet Root Directory
    group: "Longhorn CSI Driver Settings"
  - variable: csi.attacherReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify the replica count of the CSI Attacher. By default 3."
    label: Longhorn CSI Attacher replica count
    group: "Longhorn CSI Driver Settings"
  - variable: csi.provisionerReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify the replica count of the CSI Provisioner. By default 3."
    label: Longhorn CSI Provisioner replica count
    group: "Longhorn CSI Driver Settings"
  - variable: csi.resizerReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify the replica count of the CSI Resizer. By default 3."
    label: Longhorn CSI Resizer replica count
    group: "Longhorn CSI Driver Settings"
  - variable: csi.snapshotterReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify the replica count of the CSI Snapshotter. By default 3."
    label: Longhorn CSI Snapshotter replica count
    group: "Longhorn CSI Driver Settings"
  - variable: defaultSettings.backupTarget
    label: Backup Target
    description: "The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE"
    group: "Longhorn Default Settings"
    type: string
    default:
  - variable: defaultSettings.backupTargetCredentialSecret
    label: Backup Target Credential Secret
    description: "The name of the Kubernetes secret associated with the backup target."
    group: "Longhorn Default Settings"
    type: string
    default:
  - variable: defaultSettings.allowRecurringJobWhileVolumeDetached
    label: Allow Recurring Job While Volume Is Detached
    description: 'If this setting is enabled, Longhorn automatically attaches the volume and takes a snapshot/backup when it is time to run a recurring snapshot/backup.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.createDefaultDiskLabeledNodes
    label: Create Default Disk on Labeled Nodes
    description: 'Create a default disk automatically only on nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist. If disabled, the default disk will be created on all new nodes when each node is first added.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.defaultDataPath
    label: Default Data Path
    description: 'Default path to use for storing data on a host. By default "/var/lib/longhorn/"'
    group: "Longhorn Default Settings"
    type: string
    default: "/var/lib/longhorn/"
  - variable: defaultSettings.defaultDataLocality
    label: Default Data Locality
    description: 'A Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.'
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "disabled"
    - "best-effort"
    default: "disabled"
  - variable: defaultSettings.replicaSoftAntiAffinity
    label: Replica Node Level Soft Anti-Affinity
    description: 'Allow scheduling on nodes with existing healthy replicas of the same volume. By default false.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.replicaAutoBalance
    label: Replica Auto Balance
    description: 'Enabling this setting automatically rebalances replicas when an available node is discovered.'
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "disabled"
    - "least-effort"
    - "best-effort"
    default: "disabled"
  - variable: defaultSettings.storageOverProvisioningPercentage
    label: Storage Over Provisioning Percentage
    description: "The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 200
  - variable: defaultSettings.storageMinimalAvailablePercentage
    label: Storage Minimal Available Percentage
    description: "If the minimum available disk capacity exceeds the actual percentage of available disk capacity, the disk becomes unschedulable until more space is freed up. By default 25."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    max: 100
    default: 25
  - variable: defaultSettings.storageReservedPercentageForDefaultDisk
    label: Storage Reserved Percentage For Default Disk
    description: "The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    max: 100
    default: 30
  - variable: defaultSettings.upgradeChecker
    label: Enable Upgrade Checker
    description: 'The Upgrade Checker periodically checks for a new Longhorn version. When a new version is available, a notification appears in the UI. By default true.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.defaultReplicaCount
    label: Default Replica Count
    description: "The default number of replicas when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3."
    group: "Longhorn Default Settings"
    type: int
    min: 1
    max: 20
    default: 3
  - variable: defaultSettings.defaultLonghornStaticStorageClass
    label: Default Longhorn Static StorageClass Name
    description: "The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. By default 'longhorn-static'."
    group: "Longhorn Default Settings"
    type: string
    default: "longhorn-static"
  - variable: defaultSettings.backupstorePollInterval
    label: Backupstore Poll Interval
    description: "In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups. Set to 0 to disable polling. By default 300."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 300
  - variable: defaultSettings.failedBackupTTL
    label: Failed Backup Time to Live
    description: "In minutes. This setting determines how long Longhorn keeps a failed backup resource. Set to 0 to disable auto-deletion."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1440
  - variable: defaultSettings.restoreVolumeRecurringJobs
    label: Restore Volume Recurring Jobs
    description: "Restore recurring jobs from the backup volume on the backup target, and create recurring jobs if they do not exist during a backup restoration."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.recurringSuccessfulJobsHistoryLimit
    label: Cronjob Successful Jobs History Limit
    description: "This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1
  - variable: defaultSettings.recurringFailedJobsHistoryLimit
    label: Cronjob Failed Jobs History Limit
    description: "This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1
  - variable: defaultSettings.supportBundleFailedHistoryLimit
    label: SupportBundle Failed History Limit
    description: "This setting specifies how many failed support bundles can exist in the cluster. Set this value to **0** to have Longhorn automatically purge all failed support bundles."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1
  - variable: defaultSettings.autoSalvage
    label: Automatic salvage
    description: "If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to a network disconnection. Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly
    label: Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly
    description: 'If enabled, Longhorn will automatically delete a workload pod that is managed by a controller (e.g. Deployment, StatefulSet, DaemonSet, etc.) when the Longhorn volume is detached unexpectedly (e.g. during a Kubernetes upgrade, Docker reboot, or network disconnect). By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.disableSchedulingOnCordonedNode
    label: Disable Scheduling On Cordoned Node
    description: "Prevent the Longhorn manager from scheduling replicas on cordoned Kubernetes nodes. By default true."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.replicaZoneSoftAntiAffinity
    label: Replica Zone Level Soft Anti-Affinity
    description: "Allow scheduling new replicas of a volume to nodes in the same zone as existing healthy replicas. Nodes that don't belong to any zone will be treated as if they belong to the same zone. Note that Longhorn relies on the label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone. By default true."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.replicaDiskSoftAntiAffinity
    label: Replica Disk Level Soft Anti-Affinity
    description: 'Allow scheduling on disks with existing healthy replicas of the same volume. By default true.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.allowEmptyNodeSelectorVolume
    label: Allow Empty Node Selector Volume
    description: "Allow scheduling volumes with an empty node selector to any node."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.allowEmptyDiskSelectorVolume
    label: Allow Empty Disk Selector Volume
    description: "Allow scheduling volumes with an empty disk selector to any disk."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.nodeDownPodDeletionPolicy
    label: Pod Deletion Policy When Node is Down
    description: "Defines the Longhorn action when a volume is stuck with a StatefulSet/Deployment pod on a node that is down."
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "do-nothing"
    - "delete-statefulset-pod"
    - "delete-deployment-pod"
    - "delete-both-statefulset-and-deployment-pod"
    default: "do-nothing"
  - variable: defaultSettings.nodeDrainPolicy
    label: Node Drain Policy
    description: "Define the policy to use when a node with the last healthy replica of a volume is drained."
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "block-if-contains-last-replica"
    - "allow-if-replica-is-stopped"
    - "always-allow"
    default: "block-if-contains-last-replica"
  - variable: defaultSettings.replicaReplenishmentWaitInterval
    label: Replica Replenishment Wait Interval
    description: "In seconds. The interval determines the minimum time Longhorn waits in order to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 600
  - variable: defaultSettings.concurrentReplicaRebuildPerNodeLimit
    label: Concurrent Replica Rebuild Per Node Limit
    description: "This setting controls how many replicas on a node can be rebuilt simultaneously."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 5
  - variable: defaultSettings.concurrentVolumeBackupRestorePerNodeLimit
    label: Concurrent Volume Backup Restore Per Node Limit
    description: "This setting controls how many volumes on a node can restore a backup concurrently. Set the value to **0** to disable backup restore."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 5
  - variable: defaultSettings.disableRevisionCounter
    label: Disable Revision Counter
    description: "This setting is only for volumes created by the UI. By default this is false, meaning there will be a revision counter file to track every write to the volume. During salvage recovery, Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume. If the revision counter is disabled, Longhorn will not track every write to the volume. During salvage recovery, Longhorn will use the 'volume-head-xxx.img' file's last modification time and file size to pick the replica candidate to recover the whole volume."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.systemManagedPodsImagePullPolicy
    label: System Managed Pod Image Pull Policy
    description: "This setting defines the image pull policy of Longhorn system-managed pods, e.g. instance manager, engine image, CSI driver, etc. The new image pull policy will only apply after the system-managed pods restart."
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "if-not-present"
    - "always"
    - "never"
    default: "if-not-present"
  - variable: defaultSettings.allowVolumeCreationWithDegradedAvailability
    label: Allow Volume Creation with Degraded Availability
    description: "This setting allows users to create and attach a volume that doesn't have all of its replicas scheduled at the time of creation."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.autoCleanupSystemGeneratedSnapshot
    label: Automatically Cleanup System Generated Snapshot
    description: "This setting enables Longhorn to automatically clean up the system-generated snapshot after a replica rebuild is done."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit
    label: Concurrent Automatic Engine Upgrade Per Node Limit
    description: "This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading the Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 0
  - variable: defaultSettings.backingImageCleanupWaitInterval
    label: Backing Image Cleanup Wait Interval
    description: "This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when no replica on the disk is using it."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 60
  - variable: defaultSettings.backingImageRecoveryWaitInterval
    label: Backing Image Recovery Wait Interval
    description: "This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 300
  - variable: defaultSettings.guaranteedInstanceManagerCPU
    label: Guaranteed Instance Manager CPU
    description: "This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager pod. You can leave it at the default value, which is 12%."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    max: 40
    default: 12
  - variable: defaultSettings.logLevel
    label: Log Level
    description: "The log level (Panic, Fatal, Error, Warn, Info, Debug, or Trace) used in the Longhorn manager. Defaults to Info."
    group: "Longhorn Default Settings"
    type: string
    default: "Info"
  - variable: defaultSettings.kubernetesClusterAutoscalerEnabled
    label: Kubernetes Cluster Autoscaler Enabled (Experimental)
    description: "Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler."
    group: "Longhorn Default Settings"
    type: boolean
    default: false
  - variable: defaultSettings.orphanAutoDeletion
    label: Orphaned Data Cleanup
    description: "This setting allows Longhorn to automatically delete orphan resources and their corresponding orphaned data, such as stale replicas. Orphan resources on down or unknown nodes will not be cleaned up automatically."
    group: "Longhorn Default Settings"
    type: boolean
    default: false
  - variable: defaultSettings.storageNetwork
    label: Storage Network
    description: "Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network."
    group: "Longhorn Default Settings"
    type: string
    default:
  - variable: defaultSettings.deletingConfirmationFlag
    label: Deleting Confirmation Flag
    description: "This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.engineReplicaTimeout
    label: Timeout between Engine and Replica
    description: "In seconds. The setting specifies the timeout between the engine and replica(s); the value should be between 8 and 30 seconds. The default value is 8 seconds."
    group: "Longhorn Default Settings"
    type: int
    default: "8"
  - variable: defaultSettings.snapshotDataIntegrity
    label: Snapshot Data Integrity
    description: "This setting allows users to enable or disable snapshot hashing and data integrity checking."
    group: "Longhorn Default Settings"
    type: string
    default: "disabled"
  - variable: defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation
    label: Immediate Snapshot Data Integrity Check After Creating a Snapshot
    description: "Hashing snapshot disk files impacts the performance of the system. Immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.snapshotDataIntegrityCronjob
    label: Snapshot Data Integrity Check CronJob
    description: "Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files."
    group: "Longhorn Default Settings"
    type: string
    default: "0 0 */7 * *"
  - variable: defaultSettings.removeSnapshotsDuringFilesystemTrim
    label: Remove Snapshots During Filesystem Trim
    description: "This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and its ancestors as removed, stopping at the snapshot containing multiple children."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.fastReplicaRebuildEnabled
    label: Fast Replica Rebuild Enabled
    description: "This feature enables fast replica rebuilding. It relies on the checksums of snapshot disk files, so setting snapshot-data-integrity to **enable** or **fast-check** is a prerequisite."
    group: "Longhorn Default Settings"
    type: boolean
    default: false
  - variable: defaultSettings.replicaFileSyncHttpClientTimeout
    label: Timeout of HTTP Client to Replica File Sync Server
    description: "In seconds. The setting specifies the HTTP client timeout to the file sync server."
    group: "Longhorn Default Settings"
    type: int
    default: "30"
  - variable: defaultSettings.backupCompressionMethod
    label: Backup Compression Method
    description: "This setting allows users to specify the backup compression method."
    group: "Longhorn Default Settings"
    type: string
    default: "lz4"
  - variable: defaultSettings.backupConcurrentLimit
    label: Backup Concurrent Limit Per Backup
    description: "This setting controls how many worker threads run concurrently per backup."
    group: "Longhorn Default Settings"
    type: int
    min: 1
    default: 2
  - variable: defaultSettings.restoreConcurrentLimit
    label: Restore Concurrent Limit Per Backup
    description: "This setting controls how many worker threads run concurrently per restore."
    group: "Longhorn Default Settings"
    type: int
    min: 1
    default: 2
  - variable: defaultSettings.v2DataEngine
    label: V2 Data Engine
    description: "This allows users to activate the v2 data engine based on SPDK. It is currently in the preview phase and should not be used in a production environment."
    group: "Longhorn V2 Data Engine (Preview Feature) Settings"
    type: boolean
    default: false
  - variable: defaultSettings.offlineReplicaRebuilding
    label: Offline Replica Rebuilding
    description: "This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine."
    group: "Longhorn V2 Data Engine (Preview Feature) Settings"
    required: true
    type: enum
    options:
    - "enabled"
    - "disabled"
    default: "enabled"
- variable: persistence.defaultClass
  default: "true"
  description: "Set as the default StorageClass for Longhorn"
  label: Default Storage Class
  group: "Longhorn Storage Class Settings"
  required: true
  type: boolean
- variable: persistence.reclaimPolicy
  label: Storage Class Retain Policy
  description: "Define the reclaim policy. Options: `Retain`, `Delete`"
  group: "Longhorn Storage Class Settings"
  required: true
  type: enum
  options:
  - "Delete"
  - "Retain"
  default: "Delete"
- variable: persistence.defaultClassReplicaCount
  description: "Set the replica count for the Longhorn StorageClass"
  label: Default Storage Class Replica Count
  group: "Longhorn Storage Class Settings"
  type: int
  min: 1
  max: 10
  default: 3
- variable: persistence.defaultDataLocality
  description: "Set data locality for the Longhorn StorageClass. Options: `disabled`, `best-effort`"
  label: Default Storage Class Data Locality
  group: "Longhorn Storage Class Settings"
  type: enum
  options:
  - "disabled"
  - "best-effort"
  default: "disabled"
- variable: persistence.recurringJobSelector.enable
  description: "Enable the recurring job selector for the Longhorn StorageClass"
  group: "Longhorn Storage Class Settings"
  label: Enable Storage Class Recurring Job Selector
  type: boolean
  default: false
  show_subquestion_if: true
  subquestions:
  - variable: persistence.recurringJobSelector.jobList
    description: 'Recurring job selector list for the Longhorn StorageClass. Be careful with the quoting of the input. e.g., [{"name":"backup", "isGroup":true}]'
    label: Storage Class Recurring Job Selector List
    group: "Longhorn Storage Class Settings"
    type: string
    default:
- variable: persistence.defaultNodeSelector.enable
  description: "Enable the node selector for the Longhorn StorageClass"
  group: "Longhorn Storage Class Settings"
  label: Enable Storage Class Node Selector
  type: boolean
  default: false
  show_subquestion_if: true
  subquestions:
  - variable: persistence.defaultNodeSelector.selector
    label: Storage Class Node Selector
    description: 'This selector allows only nodes with these tags to be used for the volume. e.g. `"storage,fast"`'
    group: "Longhorn Storage Class Settings"
    type: string
    default:
- variable: persistence.backingImage.enable
|
|
||||||
description: "Set backing image for Longhorn StorageClass"
|
|
||||||
group: "Longhorn Storage Class Settings"
|
|
||||||
label: Default Storage Class Backing Image
|
|
||||||
type: boolean
|
|
||||||
default: false
|
|
||||||
show_subquestion_if: true
|
|
||||||
subquestions:
|
|
||||||
- variable: persistence.backingImage.name
|
|
||||||
description: 'Specify a backing image that will be used by Longhorn volumes in Longhorn StorageClass. If not exists, the backing image data source type and backing image data source parameters should be specified so that Longhorn will create the backing image before using it.'
|
|
||||||
label: Storage Class Backing Image Name
|
|
||||||
group: "Longhorn Storage Class Settings"
|
|
||||||
type: string
|
|
||||||
default:
|
|
||||||
- variable: persistence.backingImage.expectedChecksum
|
|
||||||
description: 'Specify the expected SHA512 checksum of the selected backing image in Longhorn StorageClass.
|
|
||||||
WARNING:
|
|
||||||
- If the backing image name is not specified, setting this field is meaningless.
|
|
||||||
- It is not recommended to set this field if the data source type is \"export-from-volume\".'
|
|
||||||
label: Storage Class Backing Image Expected SHA512 Checksum
|
|
||||||
group: "Longhorn Storage Class Settings"
|
|
||||||
type: string
|
|
||||||
default:
|
|
||||||
- variable: persistence.backingImage.dataSourceType
|
|
||||||
description: 'Specify the data source type for the backing image used in Longhorn StorageClass.
|
|
||||||
If the backing image does not exists, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
|
|
||||||
WARNING:
|
|
||||||
- If the backing image name is not specified, setting this field is meaningless.
|
|
||||||
- As for backing image creation with data source type \"upload\", it is recommended to do it via UI rather than StorageClass here. Uploading requires file data sending to the Longhorn backend after the object creation, which is complicated if you want to handle it manually.'
|
|
||||||
label: Storage Class Backing Image Data Source Type
|
|
||||||
group: "Longhorn Storage Class Settings"
|
|
||||||
type: enum
|
|
||||||
options:
|
|
||||||
- ""
|
|
||||||
- "download"
|
|
||||||
- "upload"
|
|
||||||
- "export-from-volume"
|
|
||||||
default: ""
|
|
||||||
- variable: persistence.backingImage.dataSourceParameters
|
|
||||||
description: "Specify the data source parameters for the backing image used in Longhorn StorageClass.
|
|
||||||
If the backing image does not exists, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
|
|
||||||
This option accepts a json string of a map. e.g., '{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'.
|
|
||||||
WARNING:
|
|
||||||
- If the backing image name is not specified, setting this field is meaningless.
|
|
||||||
- Be careful of the quotes here."
|
|
||||||
label: Storage Class Backing Image Data Source Parameters
|
|
||||||
group: "Longhorn Storage Class Settings"
|
|
||||||
type: string
|
|
||||||
default:
|
|
||||||
- variable: persistence.removeSnapshotsDuringFilesystemTrim
|
|
||||||
description: "Allow automatically removing snapshots during filesystem trim for Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled`"
|
|
||||||
label: Default Storage Class Remove Snapshots During Filesystem Trim
|
|
||||||
group: "Longhorn Storage Class Settings"
|
|
||||||
type: enum
|
|
||||||
options:
|
|
||||||
- "ignored"
|
|
||||||
- "enabled"
|
|
||||||
- "disabled"
|
|
||||||
default: "ignored"
|
|
||||||
- variable: ingress.enabled
|
|
||||||
default: "false"
|
|
||||||
description: "Expose app using Layer 7 Load Balancer - ingress"
|
|
||||||
type: boolean
|
|
||||||
group: "Services and Load Balancing"
|
|
||||||
label: Expose app using Layer 7 Load Balancer
|
|
||||||
show_subquestion_if: true
|
|
||||||
subquestions:
|
|
||||||
- variable: ingress.host
|
|
||||||
default: "xip.io"
|
|
||||||
description: "layer 7 Load Balancer hostname"
|
|
||||||
type: hostname
|
|
||||||
required: true
|
|
||||||
label: Layer 7 Load Balancer Hostname
|
|
||||||
- variable: ingress.path
|
|
||||||
default: "/"
|
|
||||||
description: "If ingress is enabled you can set the default ingress path"
|
|
||||||
type: string
|
|
||||||
required: true
|
|
||||||
label: Ingress Path
|
|
||||||
- variable: service.ui.type
|
|
||||||
default: "Rancher-Proxy"
|
|
||||||
description: "Define Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy`"
|
|
||||||
type: enum
|
|
||||||
options:
|
|
||||||
- "ClusterIP"
|
|
||||||
- "NodePort"
|
|
||||||
- "LoadBalancer"
|
|
||||||
- "Rancher-Proxy"
|
|
||||||
label: Longhorn UI Service
|
|
||||||
show_if: "ingress.enabled=false"
|
|
||||||
group: "Services and Load Balancing"
|
|
||||||
show_subquestion_if: "NodePort"
|
|
||||||
subquestions:
|
|
||||||
- variable: service.ui.nodePort
|
|
||||||
default: ""
|
|
||||||
description: "NodePort port number(to set explicitly, choose port between 30000-32767)"
|
|
||||||
type: int
|
|
||||||
min: 30000
|
|
||||||
max: 32767
|
|
||||||
show_if: "service.ui.type=NodePort||service.ui.type=LoadBalancer"
|
|
||||||
label: UI Service NodePort number
|
|
||||||
- variable: enablePSP
|
|
||||||
default: "false"
|
|
||||||
description: "Setup a pod security policy for Longhorn workloads."
|
|
||||||
label: Pod Security Policy
|
|
||||||
type: boolean
|
|
||||||
group: "Other Settings"
|
|
||||||
- variable: global.cattle.windowsCluster.enabled
|
|
||||||
default: "false"
|
|
||||||
description: "Enable this to allow Longhorn to run on the Rancher deployed Windows cluster."
|
|
||||||
label: Rancher Windows Cluster
|
|
||||||
type: boolean
|
|
||||||
group: "Other Settings"
|
|
||||||
- variable: networkPolicies.enabled
|
|
||||||
description: "Enable NetworkPolicies to limit access to the longhorn pods.
|
|
||||||
Warning: The Rancher Proxy will not work if this feature is enabled and a custom NetworkPolicy must be added."
|
|
||||||
group: "Other Settings"
|
|
||||||
label: Network Policies
|
|
||||||
default: "false"
|
|
||||||
type: boolean
|
|
||||||
subquestions:
|
|
||||||
- variable: networkPolicies.type
|
|
||||||
label: Network Policies for Ingress
|
|
||||||
description: "Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1`"
|
|
||||||
show_if: "networkPolicies.enabled=true&&ingress.enabled=true"
|
|
||||||
type: enum
|
|
||||||
default: "rke2"
|
|
||||||
options:
|
|
||||||
- "rke1"
|
|
||||||
- "rke2"
|
|
||||||
- "k3s"
|
|
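The `jobList` and `dataSourceParameters` questions above both expect JSON strings, which is why their descriptions warn about quoting. A hypothetical `values.yaml` fragment illustrating the formats (the backing image name is illustrative; the URL comes from the chart's own example):

```yaml
persistence:
  recurringJobSelector:
    enable: true
    # single-quoted so the inner double quotes survive YAML parsing
    jobList: '[{"name":"backup", "isGroup":true}]'
  backingImage:
    enable: true
    name: example-backing-image        # hypothetical name
    dataSourceType: download
    dataSourceParameters: '{"url":"https://backing-image-example.s3-region.amazonaws.com/test-backing-image"}'
```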
chart/questions.yml (new file, 182 lines)
@ -0,0 +1,182 @@
categories:
- storage
labels:
  io.rancher.certified: experimental
namespace: longhorn-system
questions:
- variable: csi.attacherImage
  default:
  description: "Specify CSI attacher image. Leave blank to autodetect."
  type: string
  label: Longhorn CSI Attacher Image
  group: "Longhorn CSI Driver Settings"
- variable: csi.provisionerImage
  default:
  description: "Specify CSI provisioner image. Leave blank to autodetect."
  type: string
  label: Longhorn CSI Provisioner Image
  group: "Longhorn CSI Driver Settings"
- variable: csi.driverRegistrarImage
  default:
  description: "Specify CSI Driver Registrar image. Leave blank to autodetect."
  type: string
  label: Longhorn CSI Driver Registrar Image
  group: "Longhorn CSI Driver Settings"
- variable: csi.kubeletRootDir
  default:
  description: "Specify kubelet root-dir. Leave blank to autodetect."
  type: string
  label: Kubelet Root Directory
  group: "Longhorn CSI Driver Settings"
- variable: csi.attacherReplicaCount
  type: int
  default:
  min: 1
  max: 10
  description: "Specify replica count of CSI Attacher. By default 3."
  label: Longhorn CSI Attacher replica count
  group: "Longhorn CSI Driver Settings"
- variable: csi.provisionerReplicaCount
  type: int
  default:
  min: 1
  max: 10
  description: "Specify replica count of CSI Provisioner. By default 3."
  label: Longhorn CSI Provisioner replica count
  group: "Longhorn CSI Driver Settings"
- variable: persistence.defaultClass
  default: "true"
  description: "Set as default StorageClass"
  group: "Longhorn CSI Driver Settings"
  type: boolean
  required: true
  label: Default Storage Class
- variable: persistence.defaultClassReplicaCount
  description: "Set replica count for default StorageClass"
  group: "Longhorn CSI Driver Settings"
  type: int
  default: 3
  min: 1
  max: 10
  label: Default Storage Class Replica Count

- variable: defaultSettings.backupTarget
  label: Backup Target
  description: "The target used for backup. Supports NFS or S3."
  group: "Longhorn Default Settings"
  type: string
  default:
- variable: defaultSettings.backupTargetCredentialSecret
  label: Backup Target Credential Secret
  description: "The Kubernetes secret associated with the backup target."
  group: "Longhorn Default Settings"
  type: string
  default:
- variable: defaultSettings.createDefaultDiskLabeledNodes
  label: Create Default Disk on Labeled Nodes
  description: 'Create a default Disk automatically only on Nodes with the label "node.longhorn.io/create-default-disk=true" if no other Disks exist. If disabled, a default Disk will be created on all new Nodes (only when a Node is first added). By default false.'
  group: "Longhorn Default Settings"
  type: boolean
  default: "false"
- variable: defaultSettings.defaultDataPath
  label: Default Data Path
  description: 'Default path to use for storing data on a host. By default "/var/lib/rancher/longhorn/"'
  group: "Longhorn Default Settings"
  type: string
  default: "/var/lib/rancher/longhorn/"
- variable: defaultSettings.replicaSoftAntiAffinity
  label: Replica Soft Anti-Affinity
  description: 'Allow scheduling on nodes with existing healthy replicas of the same volume. By default true.'
  group: "Longhorn Default Settings"
  type: boolean
  default: "true"
- variable: defaultSettings.storageOverProvisioningPercentage
  label: Storage Over Provisioning Percentage
  description: "The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 500."
  group: "Longhorn Default Settings"
  type: int
  min: 0
  default: 500
- variable: defaultSettings.storageMinimalAvailablePercentage
  label: Storage Minimal Available Percentage
  description: "If the ratio of a disk's available capacity to its maximum capacity (in %) is less than the minimal available percentage, the disk becomes unschedulable until more space is freed up. By default 10."
  group: "Longhorn Default Settings"
  type: int
  min: 0
  max: 100
  default: 10
- variable: defaultSettings.upgradeChecker
  label: Enable Upgrade Checker
  description: 'Upgrade Checker will periodically check for a new Longhorn version. When a new version is available, it will notify the user via the UI. By default true.'
  group: "Longhorn Default Settings"
  type: boolean
  default: "true"
- variable: defaultSettings.defaultReplicaCount
  label: Default Replica Count
  description: "The default number of replicas when creating a volume from the Longhorn UI. For Kubernetes, update the `numberOfReplicas` in the StorageClass. By default 3."
  group: "Longhorn Default Settings"
  type: int
  min: 1
  max: 20
  default: 3
- variable: defaultSettings.guaranteedEngineCPU
  label: Guaranteed Engine CPU
  description: '(EXPERIMENTAL FEATURE) Allow Longhorn Engine to have guaranteed CPU allocation. The value is how many CPUs should be reserved for each Engine/Replica Manager Pod created by Longhorn. For example, 0.1 means one-tenth of a CPU. This will help maintain engine stability during high node workload. It only applies to the Engine/Replica Manager Pods created after the setting takes effect. WARNING: Attaching a volume may fail or get stuck while using this feature due to the resource constraint. Disabled ("0") by default.'
  group: "Longhorn Default Settings"
  type: float
  default: 0
- variable: defaultSettings.defaultLonghornStaticStorageClass
  label: Default Longhorn Static StorageClass Name
  description: "The 'storageClassName' to set on the PV/PVC when creating a PV/PVC for an existing Longhorn volume. Note that users do not need to create the corresponding StorageClass object in Kubernetes, since it is only used as a matching label for PVC binding. By default 'longhorn-static'."
  group: "Longhorn Default Settings"
  type: string
  default: "longhorn-static"
- variable: defaultSettings.backupstorePollInterval
  label: Backupstore Poll Interval
  description: "In seconds. The interval to poll the backup store for updating volumes' Last Backup field. By default 300."
  group: "Longhorn Default Settings"
  type: int
  min: 0
  default: 300
- variable: defaultSettings.taintToleration
  label: Kubernetes Taint Toleration
  description: "By setting tolerations for Longhorn and then adding taints to the nodes, the nodes with large storage can be dedicated to Longhorn only (to store replica data) and reject other general workloads. Multiple tolerations can be set here, separated by semicolons. For example, \"key1=value1:NoSchedule; key2:NoExecute\". Note that \"kubernetes.io\" is used as the key of all Kubernetes default tolerations; please do not include this substring in your toleration setting."
  group: "Longhorn Default Settings"
  type: string
  default: ""
- variable: ingress.enabled
  default: "false"
  description: "Expose the app using a Layer 7 Load Balancer - ingress"
  type: boolean
  group: "Services and Load Balancing"
  label: Expose app using Layer 7 Load Balancer
  show_subquestion_if: true
  subquestions:
  - variable: ingress.host
    default: "xip.io"
    description: "Layer 7 Load Balancer hostname"
    type: hostname
    required: true
    label: Layer 7 Load Balancer Hostname
- variable: service.ui.type
  default: "Rancher-Proxy"
  description: "Define Longhorn UI service type"
  type: enum
  options:
  - "ClusterIP"
  - "NodePort"
  - "LoadBalancer"
  - "Rancher-Proxy"
  label: Longhorn UI Service
  show_if: "ingress.enabled=false"
  group: "Services and Load Balancing"
  show_subquestion_if: "NodePort"
  subquestions:
  - variable: service.ui.nodePort
    default: ""
    description: "NodePort port number (to set it explicitly, choose a port between 30000 and 32767)"
    type: int
    min: 30000
    max: 32767
    show_if: "service.ui.type=NodePort||service.ui.type=LoadBalancer"
    label: UI Service NodePort number
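The `taintToleration` setting above packs multiple tolerations into a single semicolon-separated string rather than a YAML list. A sketch of how the example from its description would look in `values.yaml` (the keys are the illustrative ones from the description):

```yaml
defaultSettings:
  # two tolerations, separated by a semicolon;
  # keys must not contain the "kubernetes.io" substring
  taintToleration: "key1=value1:NoSchedule; key2:NoExecute"
  backupstorePollInterval: 300   # seconds between backup-store polls
```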
@ -1,5 +1,2 @@
-Longhorn is now installed on the cluster!
-Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.
-
-Visit our documentation at https://longhorn.io/docs/
-
+1. Get the application URL by running these commands:
+kubectl get po -n $release_namespace
@ -20,47 +20,3 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
 {{- $fullname := (include "longhorn.fullname" .) -}}
 {{- printf "http://%s-backend:9500" $fullname | trunc 63 | trimSuffix "-" -}}
 {{- end -}}
-
-
-{{- define "secret" }}
-{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.privateRegistry.registryUrl (printf "%s:%s" .Values.privateRegistry.registryUser .Values.privateRegistry.registryPasswd | b64enc) | b64enc }}
-{{- end }}
-
-{{- /*
-longhorn.labels generates the standard Helm labels.
-*/ -}}
-{{- define "longhorn.labels" -}}
-app.kubernetes.io/name: {{ template "longhorn.name" . }}
-helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
-app.kubernetes.io/managed-by: {{ .Release.Service }}
-app.kubernetes.io/instance: {{ .Release.Name }}
-app.kubernetes.io/version: {{ .Chart.AppVersion }}
-{{- end -}}
-
-
-{{- define "system_default_registry" -}}
-{{- if .Values.global.cattle.systemDefaultRegistry -}}
-{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
-{{- else -}}
-{{- "" -}}
-{{- end -}}
-{{- end -}}
-
-{{- define "registry_url" -}}
-{{- if .Values.privateRegistry.registryUrl -}}
-{{- printf "%s/" .Values.privateRegistry.registryUrl -}}
-{{- else -}}
-{{ include "system_default_registry" . }}
-{{- end -}}
-{{- end -}}
-
-{{- /*
-define the longhorn release namespace
-*/ -}}
-{{- define "release_namespace" -}}
-{{- if .Values.namespaceOverride -}}
-{{- .Values.namespaceOverride -}}
-{{- else -}}
-{{- .Release.Namespace -}}
-{{- end -}}
-{{- end -}}
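The `secret` helper above builds a `.dockerconfigjson` payload in two passes: the inner `b64enc` encodes `user:password`, and the outer `b64enc` encodes the whole auths map. A hypothetical rendering, assuming illustrative values `registryUrl: reg.example.com`, `registryUser: admin`, `registryPasswd: pw` (none of these names come from the chart's defaults):

```yaml
# Hypothetical Secret consuming {{ template "secret" . }}:
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: longhorn-registry-secret   # hypothetical name
data:
  # The outer b64enc wraps this intermediate JSON:
  #   {"auths": {"reg.example.com": {"auth": "YWRtaW46cHc="}}}
  # where "YWRtaW46cHc=" is base64("admin:pw") from the inner b64enc.
  .dockerconfigjson: <output of the outer b64enc>
```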
@ -2,7 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: longhorn-role
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
 rules:
 - apiGroups:
   - apiextensions.k8s.io
@ -11,7 +10,7 @@ rules:
   verbs:
   - "*"
 - apiGroups: [""]
-  resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims","persistentvolumeclaims/status", "nodes", "proxy/nodes", "pods/log", "secrets", "services", "endpoints", "configmaps", "serviceaccounts"]
+  resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes", "pods/log", "secrets", "services", "endpoints", "configmaps"]
   verbs: ["*"]
 - apiGroups: [""]
   resources: ["namespaces"]
@ -22,56 +21,20 @@ rules:
 - apiGroups: ["batch"]
   resources: ["jobs", "cronjobs"]
   verbs: ["*"]
-- apiGroups: ["policy"]
-  resources: ["poddisruptionbudgets", "podsecuritypolicies"]
-  verbs: ["*"]
-- apiGroups: ["scheduling.k8s.io"]
-  resources: ["priorityclasses"]
-  verbs: ["watch", "list"]
 - apiGroups: ["storage.k8s.io"]
-  resources: ["storageclasses", "volumeattachments", "volumeattachments/status", "csinodes", "csidrivers"]
-  verbs: ["*"]
-- apiGroups: ["snapshot.storage.k8s.io"]
-  resources: ["volumesnapshotclasses", "volumesnapshots", "volumesnapshotcontents", "volumesnapshotcontents/status"]
+  resources: ["storageclasses", "volumeattachments", "csinodes", "csidrivers"]
   verbs: ["*"]
+- apiGroups: ["coordination.k8s.io"]
+  resources: ["leases"]
+  verbs: ["get", "watch", "list", "delete", "update", "create"]
 - apiGroups: ["longhorn.io"]
   resources: ["volumes", "volumes/status", "engines", "engines/status", "replicas", "replicas/status", "settings",
-              "engineimages", "engineimages/status", "nodes", "nodes/status", "instancemanagers", "instancemanagers/status",
-{{- if .Values.openshift.enabled }}
-              "engineimages/finalizers", "nodes/finalizers", "instancemanagers/finalizers",
-{{- end }}
-              "sharemanagers", "sharemanagers/status", "backingimages", "backingimages/status",
-              "backingimagemanagers", "backingimagemanagers/status", "backingimagedatasources", "backingimagedatasources/status",
-              "backuptargets", "backuptargets/status", "backupvolumes", "backupvolumes/status", "backups", "backups/status",
-              "recurringjobs", "recurringjobs/status", "orphans", "orphans/status", "snapshots", "snapshots/status",
-              "supportbundles", "supportbundles/status", "systembackups", "systembackups/status", "systemrestores", "systemrestores/status",
-              "volumeattachments", "volumeattachments/status"]
+              "engineimages", "engineimages/status", "nodes", "nodes/status", "instancemanagers", "instancemanagers/status"]
   verbs: ["*"]
 - apiGroups: ["coordination.k8s.io"]
   resources: ["leases"]
   verbs: ["*"]
-- apiGroups: ["metrics.k8s.io"]
-  resources: ["pods", "nodes"]
-  verbs: ["get", "list"]
-- apiGroups: ["apiregistration.k8s.io"]
-  resources: ["apiservices"]
-  verbs: ["list", "watch"]
-- apiGroups: ["admissionregistration.k8s.io"]
-  resources: ["mutatingwebhookconfigurations", "validatingwebhookconfigurations"]
-  verbs: ["get", "list", "create", "patch", "delete"]
-- apiGroups: ["rbac.authorization.k8s.io"]
-  resources: ["roles", "rolebindings", "clusterrolebindings", "clusterroles"]
+# to be removed after v0.7.0
+- apiGroups: ["longhorn.rancher.io"]
+  resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes", "instancemanagers"]
   verbs: ["*"]
-{{- if .Values.openshift.enabled }}
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: longhorn-ocp-privileged-role
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-rules:
-- apiGroups: ["security.openshift.io"]
-  resources: ["securitycontextconstraints"]
-  resourceNames: ["anyuid", "privileged"]
-  verbs: ["use"]
-{{- end }}
@ -2,7 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: longhorn-bind
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
@ -10,40 +9,4 @@ roleRef:
 subjects:
 - kind: ServiceAccount
   name: longhorn-service-account
-  namespace: {{ include "release_namespace" . }}
+  namespace: {{ .Release.Namespace }}
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: longhorn-support-bundle
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cluster-admin
-subjects:
-- kind: ServiceAccount
-  name: longhorn-support-bundle
-  namespace: {{ include "release_namespace" . }}
-{{- if .Values.openshift.enabled }}
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: longhorn-ocp-privileged-bind
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: longhorn-ocp-privileged-role
-subjects:
-- kind: ServiceAccount
-  name: longhorn-service-account
-  namespace: {{ include "release_namespace" . }}
-- kind: ServiceAccount
-  name: longhorn-ui-service-account
-  namespace: {{ include "release_namespace" . }}
-- kind: ServiceAccount
-  name: default # supportbundle-agent-support-bundle uses default sa
-  namespace: {{ include "release_namespace" . }}
-{{- end }}
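The `release_namespace` helper that master uses in these bindings (in place of v0.8.0's bare `.Release.Namespace`) resolves in two steps: `namespaceOverride` wins if set, otherwise the Helm release namespace is used. A sketch with a hypothetical override value:

```yaml
# values.yaml (hypothetical):
namespaceOverride: "longhorn-system"

# With the override set, {{ include "release_namespace" . }} renders
#   namespace: longhorn-system
# for every subject above; with it unset, the helper falls back to the
# namespace the Helm release was installed into (.Release.Namespace).
```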
(File diff suppressed because it is too large)
@ -1,74 +1,52 @@
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
+  labels:
     app: longhorn-manager
   name: longhorn-manager
-  namespace: {{ include "release_namespace" . }}
+  namespace: {{ .Release.Namespace }}
 spec:
   selector:
     matchLabels:
       app: longhorn-manager
   template:
     metadata:
-      labels: {{- include "longhorn.labels" . | nindent 8 }}
+      labels:
         app: longhorn-manager
-      {{- with .Values.annotations }}
-      annotations:
-        {{- toYaml . | nindent 8 }}
-      {{- end }}
     spec:
       containers:
       - name: longhorn-manager
-        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
+        image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
+        imagePullPolicy: Always
         securityContext:
           privileged: true
         command:
        - longhorn-manager
        - -d
-        {{- if eq .Values.longhornManager.log.format "json" }}
-        - -j
-        {{- end }}
        - daemon
        - --engine-image
-        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.engine.repository }}:{{ .Values.image.longhorn.engine.tag }}"
+        - "{{ .Values.image.longhorn.engine }}:{{ .Values.image.longhorn.engineTag }}"
        - --instance-manager-image
-        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.instanceManager.repository }}:{{ .Values.image.longhorn.instanceManager.tag }}"
+        - "{{ .Values.image.longhorn.instanceManager }}:{{ .Values.image.longhorn.instanceManagerTag }}"
-        - --share-manager-image
-        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.shareManager.repository }}:{{ .Values.image.longhorn.shareManager.tag }}"
-        - --backing-image-manager-image
-        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.backingImageManager.repository }}:{{ .Values.image.longhorn.backingImageManager.tag }}"
-        - --support-bundle-manager-image
-        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.supportBundleKit.repository }}:{{ .Values.image.longhorn.supportBundleKit.tag }}"
        - --manager-image
-        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}"
+        - "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
        - --service-account
        - longhorn-service-account
         ports:
        - containerPort: 9500
          name: manager
-        - containerPort: 9501
-          name: conversion-wh
-        - containerPort: 9502
-          name: admission-wh
-        - containerPort: 9503
-          name: recov-backend
-        readinessProbe:
-          httpGet:
-            path: /v1/healthz
-            port: 9501
-            scheme: HTTPS
         volumeMounts:
        - name: dev
          mountPath: /host/dev/
        - name: proc
          mountPath: /host/proc/
+        - name: varrun
+          mountPath: /var/run/
        - name: longhorn
          mountPath: /var/lib/longhorn/
          mountPropagation: Bidirectional
-        - name: longhorn-grpc-tls
-          mountPath: /tls-files/
+        - name: longhorn-default-setting
+          mountPath: /var/lib/longhorn-setting/
         env:
        - name: POD_NAMESPACE
          valueFrom:
@ -82,6 +60,10 @@ spec:
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
+        - name: LONGHORN_BACKEND_SVC
+          value: longhorn-backend
+        - name: DEFAULT_SETTING_PATH
+          value: /var/lib/longhorn-setting/default-setting.yaml
       volumes:
      - name: dev
        hostPath:
@ -89,38 +71,15 @@ spec:
      - name: proc
        hostPath:
          path: /proc/
+      - name: varrun
+        hostPath:
+          path: /var/run/
      - name: longhorn
        hostPath:
          path: /var/lib/longhorn/
-      - name: longhorn-grpc-tls
-        secret:
-          secretName: longhorn-grpc-tls
-          optional: true
+      - name: longhorn-default-setting
+        configMap:
+          name: longhorn-default-setting
-      {{- if .Values.privateRegistry.registrySecret }}
-      imagePullSecrets:
-      - name: {{ .Values.privateRegistry.registrySecret }}
-      {{- end }}
-      {{- if .Values.longhornManager.priorityClass }}
-      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
-      {{- end }}
-      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
-      tolerations:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
-{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
-        {{- end }}
-        {{- if .Values.longhornManager.tolerations }}
-{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
-        {{- end }}
-      {{- end }}
-      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
-      nodeSelector:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
-{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
-        {{- end }}
-        {{- if .Values.longhornManager.nodeSelector }}
-{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
-        {{- end }}
-      {{- end }}
       serviceAccountName: longhorn-service-account
   updateStrategy:
     rollingUpdate:
@ -129,14 +88,10 @@ spec:
 apiVersion: v1
 kind: Service
 metadata:
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
+  labels:
     app: longhorn-manager
   name: longhorn-backend
-  namespace: {{ include "release_namespace" . }}
+  namespace: {{ .Release.Namespace }}
-  {{- if .Values.longhornManager.serviceAnnotations }}
-  annotations:
-{{ toYaml .Values.longhornManager.serviceAnnotations | indent 4 }}
-  {{- end }}
 spec:
   type: {{ .Values.service.manager.type }}
   sessionAffinity: ClientIP
@@ -2,85 +2,19 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
 name: longhorn-default-setting
-namespace: {{ include "release_namespace" . }}
+namespace: {{ .Release.Namespace }}
-labels: {{- include "longhorn.labels" . | nindent 4 }}
 data:
 default-setting.yaml: |-
-{{ if not (kindIs "invalid" .Values.defaultSettings.backupTarget) }}backup-target: {{ .Values.defaultSettings.backupTarget }}{{ end }}
+backup-target: {{ .Values.defaultSettings.backupTarget }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.backupTargetCredentialSecret) }}backup-target-credential-secret: {{ .Values.defaultSettings.backupTargetCredentialSecret }}{{ end }}
+backup-target-credential-secret: {{ .Values.defaultSettings.backupTargetCredentialSecret }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.allowRecurringJobWhileVolumeDetached) }}allow-recurring-job-while-volume-detached: {{ .Values.defaultSettings.allowRecurringJobWhileVolumeDetached }}{{ end }}
+create-default-disk-labeled-nodes: {{ .Values.defaultSettings.createDefaultDiskLabeledNodes }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.createDefaultDiskLabeledNodes) }}create-default-disk-labeled-nodes: {{ .Values.defaultSettings.createDefaultDiskLabeledNodes }}{{ end }}
+default-data-path: {{ .Values.defaultSettings.defaultDataPath }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.defaultDataPath) }}default-data-path: {{ .Values.defaultSettings.defaultDataPath }}{{ end }}
+replica-soft-anti-affinity: {{ .Values.defaultSettings.replicaSoftAntiAffinity }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.replicaSoftAntiAffinity) }}replica-soft-anti-affinity: {{ .Values.defaultSettings.replicaSoftAntiAffinity }}{{ end }}
+storage-over-provisioning-percentage: {{ .Values.defaultSettings.storageOverProvisioningPercentage }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.replicaAutoBalance) }}replica-auto-balance: {{ .Values.defaultSettings.replicaAutoBalance }}{{ end }}
+storage-minimal-available-percentage: {{ .Values.defaultSettings.storageMinimalAvailablePercentage }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.storageOverProvisioningPercentage) }}storage-over-provisioning-percentage: {{ .Values.defaultSettings.storageOverProvisioningPercentage }}{{ end }}
+upgrade-checker: {{ .Values.defaultSettings.upgradeChecker }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.storageMinimalAvailablePercentage) }}storage-minimal-available-percentage: {{ .Values.defaultSettings.storageMinimalAvailablePercentage }}{{ end }}
+default-replica-count: {{ .Values.defaultSettings.defaultReplicaCount }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.storageReservedPercentageForDefaultDisk) }}storage-reserved-percentage-for-default-disk: {{ .Values.defaultSettings.storageReservedPercentageForDefaultDisk }}{{ end }}
+guaranteed-engine-cpu: {{ .Values.defaultSettings.guaranteedEngineCPU }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.upgradeChecker) }}upgrade-checker: {{ .Values.defaultSettings.upgradeChecker }}{{ end }}
+default-longhorn-static-storage-class: {{ .Values.defaultSettings.defaultLonghornStaticStorageClass }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.defaultReplicaCount) }}default-replica-count: {{ .Values.defaultSettings.defaultReplicaCount }}{{ end }}
+backupstore-poll-interval: {{ .Values.defaultSettings.backupstorePollInterval }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.defaultDataLocality) }}default-data-locality: {{ .Values.defaultSettings.defaultDataLocality }}{{ end }}
+taint-toleration: {{ .Values.defaultSettings.taintToleration }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.defaultLonghornStaticStorageClass) }}default-longhorn-static-storage-class: {{ .Values.defaultSettings.defaultLonghornStaticStorageClass }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.backupstorePollInterval) }}backupstore-poll-interval: {{ .Values.defaultSettings.backupstorePollInterval }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.failedBackupTTL) }}failed-backup-ttl: {{ .Values.defaultSettings.failedBackupTTL }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.restoreVolumeRecurringJobs) }}restore-volume-recurring-jobs: {{ .Values.defaultSettings.restoreVolumeRecurringJobs }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.recurringSuccessfulJobsHistoryLimit) }}recurring-successful-jobs-history-limit: {{ .Values.defaultSettings.recurringSuccessfulJobsHistoryLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.recurringFailedJobsHistoryLimit) }}recurring-failed-jobs-history-limit: {{ .Values.defaultSettings.recurringFailedJobsHistoryLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.supportBundleFailedHistoryLimit) }}support-bundle-failed-history-limit: {{ .Values.defaultSettings.supportBundleFailedHistoryLimit }}{{ end }}
-{{- if or (not (kindIs "invalid" .Values.defaultSettings.taintToleration)) (.Values.global.cattle.windowsCluster.enabled) }}
-taint-toleration: {{ $windowsDefaultSettingTaintToleration := list }}{{ $defaultSettingTaintToleration := list -}}
-{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.defaultSetting.taintToleration -}}
-{{- $windowsDefaultSettingTaintToleration = .Values.global.cattle.windowsCluster.defaultSetting.taintToleration -}}
-{{- end -}}
-{{- if not (kindIs "invalid" .Values.defaultSettings.taintToleration) -}}
-{{- $defaultSettingTaintToleration = .Values.defaultSettings.taintToleration -}}
-{{- end -}}
-{{- $taintToleration := list $windowsDefaultSettingTaintToleration $defaultSettingTaintToleration }}{{ join ";" (compact $taintToleration) -}}
-{{- end }}
-{{- if or (not (kindIs "invalid" .Values.defaultSettings.systemManagedComponentsNodeSelector)) (.Values.global.cattle.windowsCluster.enabled) }}
-system-managed-components-node-selector: {{ $windowsDefaultSettingNodeSelector := list }}{{ $defaultSettingNodeSelector := list -}}
-{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector -}}
-{{ $windowsDefaultSettingNodeSelector = .Values.global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector -}}
-{{- end -}}
-{{- if not (kindIs "invalid" .Values.defaultSettings.systemManagedComponentsNodeSelector) -}}
-{{- $defaultSettingNodeSelector = .Values.defaultSettings.systemManagedComponentsNodeSelector -}}
-{{- end -}}
-{{- $nodeSelector := list $windowsDefaultSettingNodeSelector $defaultSettingNodeSelector }}{{ join ";" (compact $nodeSelector) -}}
-{{- end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.priorityClass) }}priority-class: {{ .Values.defaultSettings.priorityClass }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.autoSalvage) }}auto-salvage: {{ .Values.defaultSettings.autoSalvage }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly) }}auto-delete-pod-when-volume-detached-unexpectedly: {{ .Values.defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.disableSchedulingOnCordonedNode) }}disable-scheduling-on-cordoned-node: {{ .Values.defaultSettings.disableSchedulingOnCordonedNode }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.replicaZoneSoftAntiAffinity) }}replica-zone-soft-anti-affinity: {{ .Values.defaultSettings.replicaZoneSoftAntiAffinity }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.replicaDiskSoftAntiAffinity) }}replica-disk-soft-anti-affinity: {{ .Values.defaultSettings.replicaDiskSoftAntiAffinity }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.nodeDownPodDeletionPolicy) }}node-down-pod-deletion-policy: {{ .Values.defaultSettings.nodeDownPodDeletionPolicy }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.nodeDrainPolicy) }}node-drain-policy: {{ .Values.defaultSettings.nodeDrainPolicy }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.replicaReplenishmentWaitInterval) }}replica-replenishment-wait-interval: {{ .Values.defaultSettings.replicaReplenishmentWaitInterval }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.concurrentReplicaRebuildPerNodeLimit) }}concurrent-replica-rebuild-per-node-limit: {{ .Values.defaultSettings.concurrentReplicaRebuildPerNodeLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.concurrentVolumeBackupRestorePerNodeLimit) }}concurrent-volume-backup-restore-per-node-limit: {{ .Values.defaultSettings.concurrentVolumeBackupRestorePerNodeLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.disableRevisionCounter) }}disable-revision-counter: {{ .Values.defaultSettings.disableRevisionCounter }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.systemManagedPodsImagePullPolicy) }}system-managed-pods-image-pull-policy: {{ .Values.defaultSettings.systemManagedPodsImagePullPolicy }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.allowVolumeCreationWithDegradedAvailability) }}allow-volume-creation-with-degraded-availability: {{ .Values.defaultSettings.allowVolumeCreationWithDegradedAvailability }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.autoCleanupSystemGeneratedSnapshot) }}auto-cleanup-system-generated-snapshot: {{ .Values.defaultSettings.autoCleanupSystemGeneratedSnapshot }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit) }}concurrent-automatic-engine-upgrade-per-node-limit: {{ .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.backingImageCleanupWaitInterval) }}backing-image-cleanup-wait-interval: {{ .Values.defaultSettings.backingImageCleanupWaitInterval }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.backingImageRecoveryWaitInterval) }}backing-image-recovery-wait-interval: {{ .Values.defaultSettings.backingImageRecoveryWaitInterval }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.guaranteedInstanceManagerCPU) }}guaranteed-instance-manager-cpu: {{ .Values.defaultSettings.guaranteedInstanceManagerCPU }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.kubernetesClusterAutoscalerEnabled) }}kubernetes-cluster-autoscaler-enabled: {{ .Values.defaultSettings.kubernetesClusterAutoscalerEnabled }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.orphanAutoDeletion) }}orphan-auto-deletion: {{ .Values.defaultSettings.orphanAutoDeletion }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.storageNetwork) }}storage-network: {{ .Values.defaultSettings.storageNetwork }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.deletingConfirmationFlag) }}deleting-confirmation-flag: {{ .Values.defaultSettings.deletingConfirmationFlag }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.engineReplicaTimeout) }}engine-replica-timeout: {{ .Values.defaultSettings.engineReplicaTimeout }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrity) }}snapshot-data-integrity: {{ .Values.defaultSettings.snapshotDataIntegrity }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation) }}snapshot-data-integrity-immediate-check-after-snapshot-creation: {{ .Values.defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrityCronjob) }}snapshot-data-integrity-cronjob: {{ .Values.defaultSettings.snapshotDataIntegrityCronjob }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim) }}remove-snapshots-during-filesystem-trim: {{ .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.fastReplicaRebuildEnabled) }}fast-replica-rebuild-enabled: {{ .Values.defaultSettings.fastReplicaRebuildEnabled }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.replicaFileSyncHttpClientTimeout) }}replica-file-sync-http-client-timeout: {{ .Values.defaultSettings.replicaFileSyncHttpClientTimeout }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.logLevel) }}log-level: {{ .Values.defaultSettings.logLevel }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.backupCompressionMethod) }}backup-compression-method: {{ .Values.defaultSettings.backupCompressionMethod }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.backupConcurrentLimit) }}backup-concurrent-limit: {{ .Values.defaultSettings.backupConcurrentLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.restoreConcurrentLimit) }}restore-concurrent-limit: {{ .Values.defaultSettings.restoreConcurrentLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.v2DataEngine) }}v2-data-engine: {{ .Values.defaultSettings.v2DataEngine }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.offlineReplicaRebuilding) }}offline-replica-rebuilding: {{ .Values.defaultSettings.offlineReplicaRebuilding }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.allowEmptyNodeSelectorVolume) }}allow-empty-node-selector-volume: {{ .Values.defaultSettings.allowEmptyNodeSelectorVolume }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.allowEmptyDiskSelectorVolume) }}allow-empty-disk-selector-volume: {{ .Values.defaultSettings.allowEmptyDiskSelectorVolume }}{{ end }}
@@ -2,8 +2,7 @@ apiVersion: apps/v1
 kind: Deployment
 metadata:
 name: longhorn-driver-deployer
-namespace: {{ include "release_namespace" . }}
+namespace: {{ .Release.Namespace }}
-labels: {{- include "longhorn.labels" . | nindent 4 }}
 spec:
 replicas: 1
 selector:
@@ -11,23 +10,23 @@ spec:
 app: longhorn-driver-deployer
 template:
 metadata:
-labels: {{- include "longhorn.labels" . | nindent 8 }}
+labels:
 app: longhorn-driver-deployer
 spec:
 initContainers:
 - name: wait-longhorn-manager
-image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
+image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
 command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
 containers:
 - name: longhorn-driver-deployer
-image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
+image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
-imagePullPolicy: {{ .Values.image.pullPolicy }}
+imagePullPolicy: Always
 command:
 - longhorn-manager
 - -d
 - deploy-driver
 - --manager-image
-- "{{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}"
+- "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
 - --manager-url
 - http://longhorn-backend:9500/v1
 env:
@@ -47,72 +46,24 @@ spec:
 - name: KUBELET_ROOT_DIR
 value: {{ .Values.csi.kubeletRootDir }}
 {{- end }}
-{{- if and .Values.image.csi.attacher.repository .Values.image.csi.attacher.tag }}
+{{- if .Values.csi.attacherImage }}
 - name: CSI_ATTACHER_IMAGE
-value: "{{ template "registry_url" . }}{{ .Values.image.csi.attacher.repository }}:{{ .Values.image.csi.attacher.tag }}"
+value: {{ .Values.csi.attacherImage }}
 {{- end }}
-{{- if and .Values.image.csi.provisioner.repository .Values.image.csi.provisioner.tag }}
+{{- if .Values.csi.provisionerImage }}
 - name: CSI_PROVISIONER_IMAGE
-value: "{{ template "registry_url" . }}{{ .Values.image.csi.provisioner.repository }}:{{ .Values.image.csi.provisioner.tag }}"
+value: {{ .Values.csi.provisionerImage }}
 {{- end }}
-{{- if and .Values.image.csi.nodeDriverRegistrar.repository .Values.image.csi.nodeDriverRegistrar.tag }}
+{{- if .Values.csi.driverRegistrarImage }}
-- name: CSI_NODE_DRIVER_REGISTRAR_IMAGE
+- name: CSI_DRIVER_REGISTRAR_IMAGE
-value: "{{ template "registry_url" . }}{{ .Values.image.csi.nodeDriverRegistrar.repository }}:{{ .Values.image.csi.nodeDriverRegistrar.tag }}"
+value: {{ .Values.csi.driverRegistrarImage }}
-{{- end }}
-{{- if and .Values.image.csi.resizer.repository .Values.image.csi.resizer.tag }}
-- name: CSI_RESIZER_IMAGE
-value: "{{ template "registry_url" . }}{{ .Values.image.csi.resizer.repository }}:{{ .Values.image.csi.resizer.tag }}"
-{{- end }}
-{{- if and .Values.image.csi.snapshotter.repository .Values.image.csi.snapshotter.tag }}
-- name: CSI_SNAPSHOTTER_IMAGE
-value: "{{ template "registry_url" . }}{{ .Values.image.csi.snapshotter.repository }}:{{ .Values.image.csi.snapshotter.tag }}"
-{{- end }}
-{{- if and .Values.image.csi.livenessProbe.repository .Values.image.csi.livenessProbe.tag }}
-- name: CSI_LIVENESS_PROBE_IMAGE
-value: "{{ template "registry_url" . }}{{ .Values.image.csi.livenessProbe.repository }}:{{ .Values.image.csi.livenessProbe.tag }}"
 {{- end }}
 {{- if .Values.csi.attacherReplicaCount }}
 - name: CSI_ATTACHER_REPLICA_COUNT
-value: {{ .Values.csi.attacherReplicaCount | quote }}
+value: "{{ .Values.csi.attacherReplicaCount }}"
 {{- end }}
 {{- if .Values.csi.provisionerReplicaCount }}
 - name: CSI_PROVISIONER_REPLICA_COUNT
-value: {{ .Values.csi.provisionerReplicaCount | quote }}
+value: "{{ .Values.csi.provisionerReplicaCount }}"
-{{- end }}
-{{- if .Values.csi.resizerReplicaCount }}
-- name: CSI_RESIZER_REPLICA_COUNT
-value: {{ .Values.csi.resizerReplicaCount | quote }}
-{{- end }}
-{{- if .Values.csi.snapshotterReplicaCount }}
-- name: CSI_SNAPSHOTTER_REPLICA_COUNT
-value: {{ .Values.csi.snapshotterReplicaCount | quote }}
-{{- end }}
-
-{{- if .Values.privateRegistry.registrySecret }}
-imagePullSecrets:
-- name: {{ .Values.privateRegistry.registrySecret }}
-{{- end }}
-{{- if .Values.longhornDriver.priorityClass }}
-priorityClassName: {{ .Values.longhornDriver.priorityClass | quote }}
-{{- end }}
-{{- if or .Values.longhornDriver.tolerations .Values.global.cattle.windowsCluster.enabled }}
-tolerations:
-{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
-{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
-{{- end }}
-{{- if .Values.longhornDriver.tolerations }}
-{{ toYaml .Values.longhornDriver.tolerations | indent 6 }}
-{{- end }}
-{{- end }}
-{{- if or .Values.longhornDriver.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
-nodeSelector:
-{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
-{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
-{{- end }}
-{{- if .Values.longhornDriver.nodeSelector }}
-{{ toYaml .Values.longhornDriver.nodeSelector | indent 8 }}
-{{- end }}
 {{- end }}
 serviceAccountName: longhorn-service-account
-securityContext:
-runAsUser: 0
@@ -1,174 +1,46 @@
-{{- if .Values.openshift.enabled }}
-{{- if .Values.openshift.ui.route }}
-# https://github.com/openshift/oauth-proxy/blob/master/contrib/sidecar.yaml
-# Create a proxy service account and ensure it will use the route "proxy"
-# Create a secure connection to the proxy via a route
-apiVersion: route.openshift.io/v1
-kind: Route
-metadata:
-labels: {{- include "longhorn.labels" . | nindent 4 }}
-app: longhorn-ui
-name: {{ .Values.openshift.ui.route }}
-namespace: {{ include "release_namespace" . }}
-spec:
-to:
-kind: Service
-name: longhorn-ui
-tls:
-termination: reencrypt
----
-apiVersion: v1
-kind: Service
-metadata:
-labels: {{- include "longhorn.labels" . | nindent 4 }}
-app: longhorn-ui
-name: longhorn-ui
-namespace: {{ include "release_namespace" . }}
-annotations:
-service.alpha.openshift.io/serving-cert-secret-name: longhorn-ui-tls
-spec:
-ports:
-- name: longhorn-ui
-port: {{ .Values.openshift.ui.port | default 443 }}
-targetPort: {{ .Values.openshift.ui.proxy | default 8443 }}
-selector:
-app: longhorn-ui
----
-{{- end }}
-{{- end }}
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-labels: {{- include "longhorn.labels" . | nindent 4 }}
+labels:
 app: longhorn-ui
 name: longhorn-ui
-namespace: {{ include "release_namespace" . }}
+namespace: {{ .Release.Namespace }}
 spec:
-replicas: {{ .Values.longhornUI.replicas }}
+replicas: 1
 selector:
 matchLabels:
 app: longhorn-ui
 template:
 metadata:
-labels: {{- include "longhorn.labels" . | nindent 8 }}
+labels:
 app: longhorn-ui
 spec:
-serviceAccountName: longhorn-ui-service-account
-affinity:
-podAntiAffinity:
-preferredDuringSchedulingIgnoredDuringExecution:
-- weight: 1
-podAffinityTerm:
-labelSelector:
-matchExpressions:
-- key: app
-operator: In
-values:
-- longhorn-ui
-topologyKey: kubernetes.io/hostname
 containers:
-{{- if .Values.openshift.enabled }}
-{{- if .Values.openshift.ui.route }}
-- name: oauth-proxy
-image: {{ template "registry_url" . }}{{ .Values.image.openshift.oauthProxy.repository }}:{{ .Values.image.openshift.oauthProxy.tag }}
-imagePullPolicy: IfNotPresent
-ports:
-- containerPort: {{ .Values.openshift.ui.proxy | default 8443 }}
-name: public
-args:
-- --https-address=:{{ .Values.openshift.ui.proxy | default 8443 }}
-- --provider=openshift
-- --openshift-service-account=longhorn-ui-service-account
-- --upstream=http://localhost:8000
-- --tls-cert=/etc/tls/private/tls.crt
-- --tls-key=/etc/tls/private/tls.key
-- --cookie-secret=SECRET
-- --openshift-sar={"namespace":"{{ include "release_namespace" . }}","group":"longhorn.io","resource":"setting","verb":"delete"}
-volumeMounts:
-- mountPath: /etc/tls/private
-name: longhorn-ui-tls
-{{- end }}
-{{- end }}
 - name: longhorn-ui
-image: {{ template "registry_url" . }}{{ .Values.image.longhorn.ui.repository }}:{{ .Values.image.longhorn.ui.tag }}
+image: "{{ .Values.image.longhorn.ui }}:{{ .Values.image.longhorn.uiTag }}"
-imagePullPolicy: {{ .Values.image.pullPolicy }}
-volumeMounts:
-- name : nginx-cache
-mountPath: /var/cache/nginx/
-- name : nginx-config
-mountPath: /var/config/nginx/
-- name: var-run
-mountPath: /var/run/
 ports:
 - containerPort: 8000
 name: http
 env:
 - name: LONGHORN_MANAGER_IP
 value: "http://longhorn-backend:9500"
-- name: LONGHORN_UI_PORT
-value: "8000"
-volumes:
-{{- if .Values.openshift.enabled }}
-{{- if .Values.openshift.ui.route }}
-- name: longhorn-ui-tls
-secret:
-secretName: longhorn-ui-tls
-{{- end }}
-{{- end }}
-- emptyDir: {}
-name: nginx-cache
-- emptyDir: {}
-name: nginx-config
-- emptyDir: {}
-name: var-run
-{{- if .Values.privateRegistry.registrySecret }}
-imagePullSecrets:
-- name: {{ .Values.privateRegistry.registrySecret }}
-{{- end }}
-{{- if .Values.longhornUI.priorityClass }}
-priorityClassName: {{ .Values.longhornUI.priorityClass | quote }}
-{{- end }}
-{{- if or .Values.longhornUI.tolerations .Values.global.cattle.windowsCluster.enabled }}
-tolerations:
-{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
-{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
-{{- end }}
-{{- if .Values.longhornUI.tolerations }}
-{{ toYaml .Values.longhornUI.tolerations | indent 6 }}
-{{- end }}
-{{- end }}
-{{- if or .Values.longhornUI.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
-nodeSelector:
-{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
-{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
-{{- end }}
-{{- if .Values.longhornUI.nodeSelector }}
-{{ toYaml .Values.longhornUI.nodeSelector | indent 8 }}
-{{- end }}
-{{- end }}
 ---
 kind: Service
 apiVersion: v1
 metadata:
-labels: {{- include "longhorn.labels" . | nindent 4 }}
+labels:
 app: longhorn-ui
 {{- if eq .Values.service.ui.type "Rancher-Proxy" }}
 kubernetes.io/cluster-service: "true"
 {{- end }}
 name: longhorn-frontend
-namespace: {{ include "release_namespace" . }}
+namespace: {{ .Release.Namespace }}
 spec:
 {{- if eq .Values.service.ui.type "Rancher-Proxy" }}
 type: ClusterIP
 {{- else }}
 type: {{ .Values.service.ui.type }}
 {{- end }}
-{{- if and .Values.service.ui.loadBalancerIP (eq .Values.service.ui.type "LoadBalancer") }}
-loadBalancerIP: {{ .Values.service.ui.loadBalancerIP }}
-{{- end }}
-{{- if and (eq .Values.service.ui.type "LoadBalancer") .Values.service.ui.loadBalancerSourceRanges }}
-loadBalancerSourceRanges: {{- toYaml .Values.service.ui.loadBalancerSourceRanges | nindent 4 }}
-{{- end }}
 selector:
 app: longhorn-ui
 ports:
@@ -1,44 +1,26 @@
 {{- if .Values.ingress.enabled }}
-{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
+apiVersion: extensions/v1beta1
-apiVersion: networking.k8s.io/v1
-{{- else -}}
-apiVersion: networking.k8s.io/v1beta1
-{{- end }}
 kind: Ingress
 metadata:
 name: longhorn-ingress
-namespace: {{ include "release_namespace" . }}
+labels:
-labels: {{- include "longhorn.labels" . | nindent 4 }}
 app: longhorn-ingress
 annotations:
-{{- if .Values.ingress.secureBackends }}
+{{- if .Values.ingress.tls }}
 ingress.kubernetes.io/secure-backends: "true"
 {{- end }}
 {{- range $key, $value := .Values.ingress.annotations }}
 {{ $key }}: {{ $value | quote }}
 {{- end }}
 spec:
-{{- if and .Values.ingress.ingressClassName (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
-ingressClassName: {{ .Values.ingress.ingressClassName }}
-{{- end }}
 rules:
 - host: {{ .Values.ingress.host }}
 http:
 paths:
 - path: {{ default "" .Values.ingress.path }}
-{{- if (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
-pathType: ImplementationSpecific
-{{- end }}
 backend:
-{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
-service:
-name: longhorn-frontend
-port:
-number: 80
-{{- else }}
-serviceName: longhorn-frontend
|
serviceName: longhorn-frontend
|
||||||
servicePort: 80
|
servicePort: 80
|
||||||
{{- end }}
|
|
||||||
{{- if .Values.ingress.tls }}
|
{{- if .Values.ingress.tls }}
|
||||||
tls:
|
tls:
|
||||||
- hosts:
|
- hosts:
|
||||||
@@ -1,27 +0,0 @@
-{{- if .Values.networkPolicies.enabled }}
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: backing-image-data-source
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      longhorn.io/component: backing-image-data-source
-  policyTypes:
-    - Ingress
-  ingress:
-    - from:
-        - podSelector:
-            matchLabels:
-              app: longhorn-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: instance-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: backing-image-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: backing-image-data-source
-{{- end }}
@@ -1,27 +0,0 @@
-{{- if .Values.networkPolicies.enabled }}
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: backing-image-manager
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      longhorn.io/component: backing-image-manager
-  policyTypes:
-    - Ingress
-  ingress:
-    - from:
-        - podSelector:
-            matchLabels:
-              app: longhorn-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: instance-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: backing-image-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: backing-image-data-source
-{{- end }}
@@ -1,27 +0,0 @@
-{{- if .Values.networkPolicies.enabled }}
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: instance-manager
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      longhorn.io/component: instance-manager
-  policyTypes:
-    - Ingress
-  ingress:
-    - from:
-        - podSelector:
-            matchLabels:
-              app: longhorn-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: instance-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: backing-image-manager
-        - podSelector:
-            matchLabels:
-              longhorn.io/component: backing-image-data-source
-{{- end }}
@@ -1,35 +0,0 @@
-{{- if .Values.networkPolicies.enabled }}
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: longhorn-manager
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      app: longhorn-manager
-  policyTypes:
-    - Ingress
-  ingress:
-    - from:
-        - podSelector:
-            matchLabels:
-              app: longhorn-manager
-        - podSelector:
-            matchLabels:
-              app: longhorn-ui
-        - podSelector:
-            matchLabels:
-              app: longhorn-csi-plugin
-        - podSelector:
-            matchLabels:
-              longhorn.io/managed-by: longhorn-manager
-            matchExpressions:
-              - { key: recurring-job.longhorn.io, operator: Exists }
-        - podSelector:
-            matchExpressions:
-              - { key: longhorn.io/job-task, operator: Exists }
-        - podSelector:
-            matchLabels:
-              app: longhorn-driver-deployer
-{{- end }}
@@ -1,17 +0,0 @@
-{{- if .Values.networkPolicies.enabled }}
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: longhorn-recovery-backend
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      app: longhorn-manager
-  policyTypes:
-    - Ingress
-  ingress:
-    - ports:
-        - protocol: TCP
-          port: 9503
-{{- end }}
@@ -1,46 +0,0 @@
-{{- if and .Values.networkPolicies.enabled .Values.ingress.enabled (not (eq .Values.networkPolicies.type "")) }}
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: longhorn-ui-frontend
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      app: longhorn-ui
-  policyTypes:
-    - Ingress
-  ingress:
-    - from:
-      {{- if eq .Values.networkPolicies.type "rke1"}}
-        - namespaceSelector:
-            matchLabels:
-              kubernetes.io/metadata.name: ingress-nginx
-          podSelector:
-            matchLabels:
-              app.kubernetes.io/component: controller
-              app.kubernetes.io/instance: ingress-nginx
-              app.kubernetes.io/name: ingress-nginx
-      {{- else if eq .Values.networkPolicies.type "rke2" }}
-        - namespaceSelector:
-            matchLabels:
-              kubernetes.io/metadata.name: kube-system
-          podSelector:
-            matchLabels:
-              app.kubernetes.io/component: controller
-              app.kubernetes.io/instance: rke2-ingress-nginx
-              app.kubernetes.io/name: rke2-ingress-nginx
-      {{- else if eq .Values.networkPolicies.type "k3s" }}
-        - namespaceSelector:
-            matchLabels:
-              kubernetes.io/metadata.name: kube-system
-          podSelector:
-            matchLabels:
-              app.kubernetes.io/name: traefik
-      ports:
-        - port: 8000
-          protocol: TCP
-        - port: 80
-          protocol: TCP
-      {{- end }}
-{{- end }}
@@ -1,33 +0,0 @@
-{{- if .Values.networkPolicies.enabled }}
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: longhorn-conversion-webhook
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      app: longhorn-manager
-  policyTypes:
-    - Ingress
-  ingress:
-    - ports:
-        - protocol: TCP
-          port: 9501
----
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: longhorn-admission-webhook
-  namespace: longhorn-system
-spec:
-  podSelector:
-    matchLabels:
-      app: longhorn-manager
-  policyTypes:
-    - Ingress
-  ingress:
-    - ports:
-        - protocol: TCP
-          port: 9502
-{{- end }}
@@ -3,22 +3,20 @@ kind: Job
 metadata:
   annotations:
     "helm.sh/hook": post-upgrade
-    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
+    "helm.sh/hook-delete-policy": hook-succeeded
   name: longhorn-post-upgrade
-  namespace: {{ include "release_namespace" . }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
+  namespace: {{ .Release.Namespace }}
 spec:
   activeDeadlineSeconds: 900
   backoffLimit: 1
   template:
     metadata:
       name: longhorn-post-upgrade
-      labels: {{- include "longhorn.labels" . | nindent 8 }}
     spec:
       containers:
       - name: longhorn-post-upgrade
-        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
+        image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
+        imagePullPolicy: Always
         command:
         - longhorn-manager
         - post-upgrade
@@ -28,29 +26,4 @@ spec:
           fieldRef:
             fieldPath: metadata.namespace
       restartPolicy: OnFailure
-      {{- if .Values.privateRegistry.registrySecret }}
-      imagePullSecrets:
-      - name: {{ .Values.privateRegistry.registrySecret }}
-      {{- end }}
-      {{- if .Values.longhornManager.priorityClass }}
-      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
-      {{- end }}
       serviceAccountName: longhorn-service-account
-      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
-      tolerations:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
-{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
-        {{- end }}
-        {{- if .Values.longhornManager.tolerations }}
-{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
-        {{- end }}
-      {{- end }}
-      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
-      nodeSelector:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
-{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
-        {{- end }}
-        {{- if .Values.longhornManager.nodeSelector }}
-{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
-        {{- end }}
-      {{- end }}
@@ -1,58 +0,0 @@
-{{- if .Values.helmPreUpgradeCheckerJob.enabled }}
-apiVersion: batch/v1
-kind: Job
-metadata:
-  annotations:
-    "helm.sh/hook": pre-upgrade
-    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
-  name: longhorn-pre-upgrade
-  namespace: {{ include "release_namespace" . }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-spec:
-  activeDeadlineSeconds: 900
-  backoffLimit: 1
-  template:
-    metadata:
-      name: longhorn-pre-upgrade
-      labels: {{- include "longhorn.labels" . | nindent 8 }}
-    spec:
-      containers:
-      - name: longhorn-pre-upgrade
-        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
-        command:
-        - longhorn-manager
-        - pre-upgrade
-        env:
-        - name: POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-      restartPolicy: OnFailure
-      {{- if .Values.privateRegistry.registrySecret }}
-      imagePullSecrets:
-      - name: {{ .Values.privateRegistry.registrySecret }}
-      {{- end }}
-      {{- if .Values.longhornManager.priorityClass }}
-      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
-      {{- end }}
-      serviceAccountName: longhorn-service-account
-      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
-      tolerations:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
-{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
-        {{- end }}
-        {{- if .Values.longhornManager.tolerations }}
-{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
-        {{- end }}
-      {{- end }}
-      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
-      nodeSelector:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
-{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
-        {{- end }}
-        {{- if .Values.longhornManager.nodeSelector }}
-{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
-        {{- end }}
-      {{- end }}
-{{- end }}
@@ -1,66 +0,0 @@
-{{- if .Values.enablePSP }}
-apiVersion: policy/v1beta1
-kind: PodSecurityPolicy
-metadata:
-  name: longhorn-psp
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-spec:
-  privileged: true
-  allowPrivilegeEscalation: true
-  requiredDropCapabilities:
-    - NET_RAW
-  allowedCapabilities:
-    - SYS_ADMIN
-  hostNetwork: false
-  hostIPC: false
-  hostPID: true
-  runAsUser:
-    rule: RunAsAny
-  seLinux:
-    rule: RunAsAny
-  fsGroup:
-    rule: RunAsAny
-  supplementalGroups:
-    rule: RunAsAny
-  volumes:
-    - configMap
-    - downwardAPI
-    - emptyDir
-    - secret
-    - projected
-    - hostPath
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: longhorn-psp-role
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-  namespace: {{ include "release_namespace" . }}
-rules:
-- apiGroups:
-  - policy
-  resources:
-  - podsecuritypolicies
-  verbs:
-  - use
-  resourceNames:
-  - longhorn-psp
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: longhorn-psp-binding
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-  namespace: {{ include "release_namespace" . }}
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: longhorn-psp-role
-subjects:
-- kind: ServiceAccount
-  name: longhorn-service-account
-  namespace: {{ include "release_namespace" . }}
-- kind: ServiceAccount
-  name: default
-  namespace: {{ include "release_namespace" . }}
-{{- end }}
@@ -1,13 +0,0 @@
-{{- if .Values.privateRegistry.createSecret }}
-{{- if .Values.privateRegistry.registrySecret }}
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ .Values.privateRegistry.registrySecret }}
-  namespace: {{ include "release_namespace" . }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-type: kubernetes.io/dockerconfigjson
-data:
-  .dockerconfigjson: {{ template "secret" . }}
-{{- end }}
-{{- end }}
@@ -2,39 +2,4 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: longhorn-service-account
-  namespace: {{ include "release_namespace" . }}
+  namespace: {{ .Release.Namespace }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-  {{- with .Values.serviceAccount.annotations }}
-  annotations:
-    {{- toYaml . | nindent 4 }}
-  {{- end }}
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: longhorn-ui-service-account
-  namespace: {{ include "release_namespace" . }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-  {{- with .Values.serviceAccount.annotations }}
-  annotations:
-    {{- toYaml . | nindent 4 }}
-  {{- end }}
-  {{- if .Values.openshift.enabled }}
-  {{- if .Values.openshift.ui.route }}
-  {{- if not .Values.serviceAccount.annotations }}
-  annotations:
-  {{- end }}
-    serviceaccounts.openshift.io/oauth-redirectreference.primary: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"longhorn-ui"}}'
-  {{- end }}
-  {{- end }}
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: longhorn-support-bundle
-  namespace: {{ include "release_namespace" . }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-  {{- with .Values.serviceAccount.annotations }}
-  annotations:
-    {{- toYaml . | nindent 4 }}
-  {{- end }}
@@ -1,74 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-    app: longhorn-conversion-webhook
-  name: longhorn-conversion-webhook
-  namespace: {{ include "release_namespace" . }}
-spec:
-  type: ClusterIP
-  sessionAffinity: ClientIP
-  selector:
-    app: longhorn-manager
-  ports:
-    - name: conversion-webhook
-      port: 9501
-      targetPort: conversion-wh
----
-apiVersion: v1
-kind: Service
-metadata:
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-    app: longhorn-admission-webhook
-  name: longhorn-admission-webhook
-  namespace: {{ include "release_namespace" . }}
-spec:
-  type: ClusterIP
-  sessionAffinity: ClientIP
-  selector:
-    app: longhorn-manager
-  ports:
-    - name: admission-webhook
-      port: 9502
-      targetPort: admission-wh
----
-apiVersion: v1
-kind: Service
-metadata:
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-    app: longhorn-recovery-backend
-  name: longhorn-recovery-backend
-  namespace: {{ include "release_namespace" . }}
-spec:
-  type: ClusterIP
-  sessionAffinity: ClientIP
-  selector:
-    app: longhorn-manager
-  ports:
-    - name: recovery-backend
-      port: 9503
-      targetPort: recov-backend
----
-apiVersion: v1
-kind: Service
-metadata:
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-  name: longhorn-engine-manager
-  namespace: {{ include "release_namespace" . }}
-spec:
-  clusterIP: None
-  selector:
-    longhorn.io/component: instance-manager
-    longhorn.io/instance-manager-type: engine
----
-apiVersion: v1
-kind: Service
-metadata:
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-  name: longhorn-replica-manager
-  namespace: {{ include "release_namespace" . }}
-spec:
-  clusterIP: None
-  selector:
-    longhorn.io/component: instance-manager
-    longhorn.io/instance-manager-type: replica
@@ -1,44 +1,17 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: longhorn-storageclass
-  namespace: {{ include "release_namespace" . }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
-data:
-  storageclass.yaml: |
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
   name: longhorn
+{{- if .Values.persistence.defaultClass }}
   annotations:
-    storageclass.kubernetes.io/is-default-class: {{ .Values.persistence.defaultClass | quote }}
+    storageclass.beta.kubernetes.io/is-default-class: "true"
+{{- else }}
+  annotations:
+    storageclass.beta.kubernetes.io/is-default-class: "false"
+{{- end }}
 provisioner: driver.longhorn.io
-allowVolumeExpansion: true
-reclaimPolicy: "{{ .Values.persistence.reclaimPolicy }}"
-volumeBindingMode: Immediate
 parameters:
   numberOfReplicas: "{{ .Values.persistence.defaultClassReplicaCount }}"
   staleReplicaTimeout: "30"
   fromBackup: ""
-  {{- if .Values.persistence.defaultFsType }}
-  fsType: "{{ .Values.persistence.defaultFsType }}"
-  {{- end }}
-  {{- if .Values.persistence.defaultMkfsParams }}
-  mkfsParams: "{{ .Values.persistence.defaultMkfsParams }}"
-  {{- end }}
-  {{- if .Values.persistence.migratable }}
-  migratable: "{{ .Values.persistence.migratable }}"
-  {{- end }}
-  {{- if .Values.persistence.backingImage.enable }}
-  backingImage: {{ .Values.persistence.backingImage.name }}
-  backingImageDataSourceType: {{ .Values.persistence.backingImage.dataSourceType }}
-  backingImageDataSourceParameters: {{ .Values.persistence.backingImage.dataSourceParameters }}
-  backingImageChecksum: {{ .Values.persistence.backingImage.expectedChecksum }}
-  {{- end }}
-  {{- if .Values.persistence.recurringJobSelector.enable }}
-  recurringJobSelector: '{{ .Values.persistence.recurringJobSelector.jobList }}'
-  {{- end }}
-  dataLocality: {{ .Values.persistence.defaultDataLocality | quote }}
-  {{- if .Values.persistence.defaultNodeSelector.enable }}
-  nodeSelector: "{{ .Values.persistence.defaultNodeSelector.selector }}"
-  {{- end }}
+  baseImage: ""
@@ -3,9 +3,8 @@
 apiVersion: v1
 kind: Secret
 metadata:
-  name: {{ .name }}
-  namespace: {{ include "release_namespace" $ }}
-  labels: {{- include "longhorn.labels" $ | nindent 4 }}
+  name: longhorn
+  labels:
     app: longhorn
 type: kubernetes.io/tls
 data:
@@ -3,22 +3,20 @@ kind: Job
 metadata:
   annotations:
     "helm.sh/hook": pre-delete
-    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
+    "helm.sh/hook-delete-policy": hook-succeeded
   name: longhorn-uninstall
-  namespace: {{ include "release_namespace" . }}
-  labels: {{- include "longhorn.labels" . | nindent 4 }}
+  namespace: {{ .Release.Namespace }}
 spec:
   activeDeadlineSeconds: 900
   backoffLimit: 1
   template:
     metadata:
       name: longhorn-uninstall
-      labels: {{- include "longhorn.labels" . | nindent 8 }}
     spec:
       containers:
       - name: longhorn-uninstall
-        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
+        image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
+        imagePullPolicy: Always
         command:
         - longhorn-manager
        - uninstall
@@ -28,30 +26,5 @@ spec:
         valueFrom:
           fieldRef:
             fieldPath: metadata.namespace
-      restartPolicy: Never
+      restartPolicy: OnFailure
-      {{- if .Values.privateRegistry.registrySecret }}
-      imagePullSecrets:
-      - name: {{ .Values.privateRegistry.registrySecret }}
-      {{- end }}
-      {{- if .Values.longhornManager.priorityClass }}
-      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
-      {{- end }}
       serviceAccountName: longhorn-service-account
-      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
-      tolerations:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
-{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
-        {{- end }}
-        {{- if .Values.longhornManager.tolerations }}
-{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
-        {{- end }}
-      {{- end }}
-      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
-      nodeSelector:
-        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
-{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
-        {{- end }}
-        {{- if or .Values.longhornManager.nodeSelector }}
-{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
-        {{- end }}
-      {{- end }}
@@ -1,7 +0,0 @@
-#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
-#{{- if .Values.enablePSP }}
-#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
-#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
-#{{- end }}
-#{{- end }}
-#{{- end }}
@ -1,434 +1,81 @@
|
|||||||
# Default values for longhorn.
|
# Default values for longhorn.
|
||||||
# This is a YAML-formatted file.
|
# This is a YAML-formatted file.
|
||||||
# Declare variables to be passed into your templates.
|
# Declare variables to be passed into your templates.
|
||||||
global:
|
|
||||||
cattle:
|
|
||||||
# -- System default registry
|
|
||||||
systemDefaultRegistry: ""
|
|
||||||
windowsCluster:
|
|
||||||
# -- Enable this to allow Longhorn to run on the Rancher deployed Windows cluster
|
|
||||||
enabled: false
|
|
||||||
# -- Tolerate Linux nodes to run Longhorn user deployed components
|
|
||||||
tolerations:
|
|
||||||
- key: "cattle.io/os"
|
|
||||||
value: "linux"
|
|
||||||
effect: "NoSchedule"
|
|
||||||
operator: "Equal"
|
|
||||||
# -- Select Linux nodes to run Longhorn user deployed components
|
|
||||||
nodeSelector:
|
|
||||||
kubernetes.io/os: "linux"
|
|
||||||
defaultSetting:
|
|
||||||
# -- Toleration for Longhorn system managed components
|
|
||||||
taintToleration: cattle.io/os=linux:NoSchedule
|
|
||||||
# -- Node selector for Longhorn system managed components
|
|
||||||
systemManagedComponentsNodeSelector: kubernetes.io/os:linux
|
|
||||||
|
|
||||||
networkPolicies:
|
|
||||||
# -- Enable NetworkPolicies to limit access to the Longhorn pods
|
|
||||||
enabled: false
|
|
||||||
# -- Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1`
|
|
||||||
type: "k3s"
|
|
||||||
|
|
||||||
image:
|
image:
|
||||||
longhorn:
|
longhorn:
|
||||||
engine:
|
engine: longhornio/longhorn-engine
|
||||||
      # -- Specify Longhorn engine image repository
      repository: longhornio/longhorn-engine
      # -- Specify Longhorn engine image tag
      tag: master-head
    manager:
      # -- Specify Longhorn manager image repository
      repository: longhornio/longhorn-manager
      # -- Specify Longhorn manager image tag
      tag: master-head
    ui:
      # -- Specify Longhorn ui image repository
      repository: longhornio/longhorn-ui
      # -- Specify Longhorn ui image tag
      tag: master-head
    instanceManager:
      # -- Specify Longhorn instance manager image repository
      repository: longhornio/longhorn-instance-manager
      # -- Specify Longhorn instance manager image tag
      tag: master-head
    shareManager:
      # -- Specify Longhorn share manager image repository
      repository: longhornio/longhorn-share-manager
      # -- Specify Longhorn share manager image tag
      tag: master-head
    backingImageManager:
      # -- Specify Longhorn backing image manager image repository
      repository: longhornio/backing-image-manager
      # -- Specify Longhorn backing image manager image tag
      tag: master-head
    supportBundleKit:
      # -- Specify Longhorn support bundle manager image repository
      repository: longhornio/support-bundle-kit
      # -- Specify Longhorn support bundle manager image tag
      tag: v0.0.27
  csi:
    attacher:
      # -- Specify CSI attacher image repository. Leave blank to autodetect
      repository: longhornio/csi-attacher
      # -- Specify CSI attacher image tag. Leave blank to autodetect
      tag: v4.2.0
    provisioner:
      # -- Specify CSI provisioner image repository. Leave blank to autodetect
      repository: longhornio/csi-provisioner
      # -- Specify CSI provisioner image tag. Leave blank to autodetect
      tag: v3.4.1
    nodeDriverRegistrar:
      # -- Specify CSI node driver registrar image repository. Leave blank to autodetect
      repository: longhornio/csi-node-driver-registrar
      # -- Specify CSI node driver registrar image tag. Leave blank to autodetect
      tag: v2.7.0
    resizer:
      # -- Specify CSI driver resizer image repository. Leave blank to autodetect
      repository: longhornio/csi-resizer
      # -- Specify CSI driver resizer image tag. Leave blank to autodetect
      tag: v1.7.0
    snapshotter:
      # -- Specify CSI driver snapshotter image repository. Leave blank to autodetect
      repository: longhornio/csi-snapshotter
      # -- Specify CSI driver snapshotter image tag. Leave blank to autodetect.
      tag: v6.2.1
    livenessProbe:
      # -- Specify CSI liveness probe image repository. Leave blank to autodetect
      repository: longhornio/livenessprobe
      # -- Specify CSI liveness probe image tag. Leave blank to autodetect
      tag: v2.9.0
  openshift:
    oauthProxy:
      # -- For OpenShift users. Specify the OAuth proxy image repository
      repository: quay.io/openshift/origin-oauth-proxy
      # -- For OpenShift users. Specify the OAuth proxy image tag. Note: use your OCP/OKD 4.x version; the current stable is 4.14
      tag: 4.14
  # -- Image pull policy which applies to all user-deployed Longhorn components, e.g., Longhorn manager, Longhorn driver, Longhorn UI
  pullPolicy: IfNotPresent
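The image map above can be overridden at install or upgrade time. A minimal sketch of an override file, assuming the standard Helm values-merging workflow — the pinned tag `v1.5.1` and the file name are purely illustrative, not recommendations from the chart:

```yaml
# custom-values.yaml (hypothetical): pin the engine and manager images
# instead of tracking master-head. All keys not listed here keep the
# chart defaults.
image:
  longhorn:
    engine:
      tag: v1.5.1   # illustrative pin, not a recommended version
    manager:
      tag: v1.5.1   # illustrative pin, not a recommended version
```

Passing the file with `helm upgrade … -f custom-values.yaml` merges it over the defaults, so only the listed keys change.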
service:
  ui:
    # -- Define Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy`
    type: ClusterIP
    # -- NodePort port number (to set explicitly, choose a port between 30000-32767)
    nodePort: null
  manager:
    # -- Define Longhorn manager service type.
    type: ClusterIP
    # -- NodePort port number (to set explicitly, choose a port between 30000-32767)
    nodePort: ""

persistence:
  # -- Set Longhorn StorageClass as default
  defaultClass: true
  # -- Set filesystem type for Longhorn StorageClass
  defaultFsType: ext4
  # -- Set mkfs options for Longhorn StorageClass
  defaultMkfsParams: ""
  # -- Set replica count for Longhorn StorageClass
  defaultClassReplicaCount: 3
  # -- Set data locality for Longhorn StorageClass. Options: `disabled`, `best-effort`
  defaultDataLocality: disabled
  # -- Define reclaim policy. Options: `Retain`, `Delete`
  reclaimPolicy: Delete
  # -- Set volume migratable for Longhorn StorageClass
  migratable: false
  recurringJobSelector:
    # -- Enable recurring job selector for Longhorn StorageClass
    enable: false
    # -- Recurring job selector list for Longhorn StorageClass. Please be careful with the quoting of the input. e.g., `[{"name":"backup", "isGroup":true}]`
    jobList: []
  backingImage:
    # -- Set backing image for Longhorn StorageClass
    enable: false
    # -- Specify a backing image that will be used by Longhorn volumes in Longhorn StorageClass. If it does not exist, the backing image data source type and data source parameters should be specified so that Longhorn can create the backing image before using it
    name: ~
    # -- Specify the data source type for the backing image used in Longhorn StorageClass.
    # If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
    dataSourceType: ~
    # -- Specify the data source parameters for the backing image used in Longhorn StorageClass. This option accepts a JSON string of a map. e.g., `'{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'`.
    dataSourceParameters: ~
    # -- Specify the expected SHA512 checksum of the selected backing image in Longhorn StorageClass
    expectedChecksum: ~
  defaultNodeSelector:
    # -- Enable node selector for Longhorn StorageClass
    enable: false
    # -- This selector allows only nodes with these tags to be used for the volume. e.g. `"storage,fast"`
    selector: ""
  # -- Allow automatically removing snapshots during filesystem trim for Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled`
  removeSnapshotsDuringFilesystemTrim: ignored
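Since the `persistence` block drives the StorageClass the chart generates, these keys are commonly tuned together. A sketch of an override, with illustrative values rather than chart defaults:

```yaml
# Hypothetical override: keep Longhorn as the default StorageClass,
# but prefer local replicas and keep data after PVC deletion.
persistence:
  defaultClass: true
  defaultClassReplicaCount: 2       # illustrative; the chart default is 3
  defaultDataLocality: best-effort  # try to keep one replica on the workload's node
  reclaimPolicy: Retain             # keep the volume when its PVC is deleted
```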
helmPreUpgradeCheckerJob:
  enabled: true

csi:
  # -- Specify kubelet root-dir. Leave blank to autodetect
  kubeletRootDir: ~
  # -- Specify replica count of CSI Attacher. Leave blank to use default count: 3
  attacherReplicaCount: ~
  # -- Specify replica count of CSI Provisioner. Leave blank to use default count: 3
  provisionerReplicaCount: ~
  # -- Specify replica count of CSI Resizer. Leave blank to use default count: 3
  resizerReplicaCount: ~
  # -- Specify replica count of CSI Snapshotter. Leave blank to use default count: 3
  snapshotterReplicaCount: ~
defaultSettings:
  # -- The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE.
  backupTarget: ~
  # -- The name of the Kubernetes secret associated with the backup target.
  backupTargetCredentialSecret: ~
  # -- If this setting is enabled, Longhorn automatically attaches the volume and takes a snapshot/backup
  # when it is time to do a recurring snapshot/backup.
  allowRecurringJobWhileVolumeDetached: ~
  # -- Create the default disk automatically only on nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist.
  # If disabled, the default disk will be created on all new nodes when each node is first added.
  createDefaultDiskLabeledNodes: ~
  # -- Default path to use for storing data on a host. By default "/var/lib/longhorn/"
  defaultDataPath: ~
  # -- A Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.
  defaultDataLocality: ~
  # -- Allow scheduling on nodes with existing healthy replicas of the same volume. By default false.
  replicaSoftAntiAffinity: ~
  # -- If enabled, Longhorn automatically rebalances replicas when an available node is discovered.
  replicaAutoBalance: ~
  # -- The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200.
  storageOverProvisioningPercentage: ~
  # -- If the minimum available disk capacity exceeds the actual percentage of available disk capacity,
  # the disk becomes unschedulable until more space is freed up. By default 25.
  storageMinimalAvailablePercentage: ~
  # -- The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node.
  storageReservedPercentageForDefaultDisk: ~
  # -- The Upgrade Checker will check for a new Longhorn version periodically.
  # When there is a new version available, a notification will appear in the UI. By default true.
  upgradeChecker: ~
  # -- The default number of replicas when a volume is created from the Longhorn UI.
  # For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3.
  defaultReplicaCount: ~
  # -- The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label,
  # so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object.
  # By default 'longhorn-static'.
  defaultLonghornStaticStorageClass: ~
  # -- In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups.
  # Set to 0 to disable the polling. By default 300.
  backupstorePollInterval: ~
  # -- In minutes. This setting determines how long Longhorn will keep a failed backup resource. Set to 0 to disable the auto-deletion.
  failedBackupTTL: ~
  # -- Restore recurring jobs from the backup volume on the backup target, and create recurring jobs if they do not exist during a backup restoration.
  restoreVolumeRecurringJobs: ~
  # -- This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0.
  recurringSuccessfulJobsHistoryLimit: ~
  # -- This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0.
  recurringFailedJobsHistoryLimit: ~
  # -- This setting specifies how many failed support bundles can exist in the cluster.
  # Set this value to **0** to have Longhorn automatically purge all failed support bundles.
  supportBundleFailedHistoryLimit: ~
  # -- taintToleration for Longhorn system components
  taintToleration: ~
  # -- nodeSelector for Longhorn system components
  systemManagedComponentsNodeSelector: ~
  # -- priorityClass for Longhorn system components
  priorityClass: ~
  # -- If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to network disconnection.
  # Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true.
  autoSalvage: ~
  # -- If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc.)
  # when a Longhorn volume is detached unexpectedly (e.g. during Kubernetes upgrade, Docker reboot, or network disconnect).
  # By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount.
  autoDeletePodWhenVolumeDetachedUnexpectedly: ~
  # -- Disallow the Longhorn manager from scheduling replicas on a Kubernetes cordoned node. By default true.
  disableSchedulingOnCordonedNode: ~
  # -- Allow scheduling new replicas of a volume to nodes in the same zone as existing healthy replicas.
  # Nodes that don't belong to any zone will be treated as being in the same zone.
  # Notice that Longhorn relies on the label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone.
  # By default true.
  replicaZoneSoftAntiAffinity: ~
  # -- Allow scheduling on disks with existing healthy replicas of the same volume. By default true.
  replicaDiskSoftAntiAffinity: ~
  # -- Defines the Longhorn action when a volume is stuck with a StatefulSet/Deployment pod on a node that is down.
  nodeDownPodDeletionPolicy: ~
  # -- Define the policy to use when a node with the last healthy replica of a volume is drained.
  nodeDrainPolicy: ~
  # -- In seconds. The interval determines how long Longhorn will wait, at minimum, to reuse the existing data on a failed replica
  # rather than directly creating a new replica for a degraded volume.
  replicaReplenishmentWaitInterval: ~
  # -- This setting controls how many replicas on a node can be rebuilt simultaneously.
  concurrentReplicaRebuildPerNodeLimit: ~
  # -- This setting controls how many volumes on a node can restore the backup concurrently. Set the value to **0** to disable backup restore.
  concurrentVolumeBackupRestorePerNodeLimit: ~
  # -- This setting is only for volumes created by the UI.
  # By default this is false, meaning there will be a revision counter file to track every write to the volume.
  # During salvage recovery, Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume.
  # If the revision counter is disabled, Longhorn will not track every write to the volume.
  # During salvage recovery, Longhorn will instead use the 'volume-head-xxx.img' file's last modification time and
  # file size to pick the replica candidate to recover the whole volume.
  disableRevisionCounter: ~
  # -- This setting defines the image pull policy of Longhorn system managed pods,
  # e.g. instance manager, engine image, CSI driver, etc.
  # The new image pull policy will only apply after the system managed pods restart.
  systemManagedPodsImagePullPolicy: ~
  # -- This setting allows users to create and attach a volume that doesn't have all the replicas scheduled at the time of creation.
  allowVolumeCreationWithDegradedAvailability: ~
  # -- This setting enables Longhorn to automatically clean up the system generated snapshot after a replica rebuild is done.
  autoCleanupSystemGeneratedSnapshot: ~
  # -- This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager.
  # The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time.
  # If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version.
  concurrentAutomaticEngineUpgradePerNodeLimit: ~
  # -- This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when there is no replica in the disk using it.
  backingImageCleanupWaitInterval: ~
  # -- This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file
  # when all disk files of this backing image become failed or unknown.
  backingImageRecoveryWaitInterval: ~
  # -- This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager pod.
  # You can leave it at the default value, which is 12%.
  guaranteedInstanceManagerCPU: ~
  # -- Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler.
  kubernetesClusterAutoscalerEnabled: ~
  # -- This setting allows Longhorn to automatically delete an orphan resource and its corresponding orphaned data, such as stale replicas.
  # Orphan resources on down or unknown nodes will not be cleaned up automatically.
  orphanAutoDeletion: ~
  # -- Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network.
  storageNetwork: ~
  # -- This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss.
  deletingConfirmationFlag: ~
  # -- In seconds. The setting specifies the timeout between the engine and replica(s); the value should be between 8 and 30 seconds.
  # The default value is 8 seconds.
  engineReplicaTimeout: ~
  # -- This setting allows users to enable or disable snapshot hashing and data integrity checking.
  snapshotDataIntegrity: ~
  # -- Hashing snapshot disk files impacts the performance of the system.
  # The immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot.
  snapshotDataIntegrityImmediateCheckAfterSnapshotCreation: ~
  # -- Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files.
  snapshotDataIntegrityCronjob: ~
  # -- This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and
  # its ancestors as removed, stopping at the snapshot containing multiple children.
  removeSnapshotsDuringFilesystemTrim: ~
  # -- This feature supports fast replica rebuilding.
  # It relies on the checksum of snapshot disk files, so setting snapshot-data-integrity to **enable** or **fast-check** is a prerequisite.
  fastReplicaRebuildEnabled: ~
  # -- In seconds. The setting specifies the HTTP client timeout to the file sync server.
  replicaFileSyncHttpClientTimeout: ~
  # -- The log level (Panic, Fatal, Error, Warn, Info, Debug, Trace) used in the Longhorn manager. Defaults to Info.
  logLevel: ~
  # -- This setting allows users to specify the backup compression method.
  backupCompressionMethod: ~
  # -- This setting controls how many worker threads run concurrently per backup.
  backupConcurrentLimit: ~
  # -- This setting controls how many worker threads run concurrently per restore.
  restoreConcurrentLimit: ~
  # -- This allows users to activate the v2 data engine based on SPDK.
  # Currently, it is in the preview phase and should not be utilized in a production environment.
  v2DataEngine: ~
  # -- This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine.
  offlineReplicaRebuilding: ~
  # -- Allow scheduling of volumes with an empty node selector to any node
  allowEmptyNodeSelectorVolume: ~
  # -- Allow scheduling of volumes with an empty disk selector to any disk
  allowEmptyDiskSelectorVolume: ~
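Every key under `defaultSettings` maps onto a Longhorn runtime setting; `~` (null) leaves the built-in default in place, so only the keys you set are applied. A sketch with placeholder values — the NFS endpoint below is invented for illustration, not a working backupstore:

```yaml
# Hypothetical override: point backups at an NFS backupstore and poll
# it more often than the documented 300-second default.
defaultSettings:
  backupTarget: nfs://backup-server.example.com:/opt/backupstore  # placeholder endpoint
  backupstorePollInterval: 120  # seconds
  defaultReplicaCount: 2        # illustrative; the documented default is 3
```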
privateRegistry:
  # -- Set `true` to create a new private registry secret
  createSecret: ~
  # -- URL of the private registry. Leave blank to apply the system default registry
  registryUrl: ~
  # -- User used to authenticate to the private registry
  registryUser: ~
  # -- Password used to authenticate to the private registry
  registryPasswd: ~
  # -- If `createSecret` is true, create a Kubernetes secret with this name; otherwise use the existing secret of this name. Use it to pull images from your private registry
  registrySecret: ~
longhornManager:
  log:
    # -- Options: `plain`, `json`
    format: plain
  # -- Priority class for longhorn manager
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn manager
  tolerations: []
  ## If you want to set tolerations for the Longhorn Manager DaemonSet, delete the `[]` in the line above
  ## and uncomment this example block
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn manager
  nodeSelector: {}
  ## If you want to set a node selector for the Longhorn Manager DaemonSet, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"
  # -- Annotations used in the Longhorn manager service
  serviceAnnotations: {}
  ## If you want to set annotations for the Longhorn Manager service, delete the `{}` in the line above
  ## and uncomment this example block
  #   annotation-key1: "annotation-value1"
  #   annotation-key2: "annotation-value2"
longhornDriver:
  # -- Priority class for longhorn driver
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn driver
  tolerations: []
  ## If you want to set tolerations for the Longhorn Driver Deployer Deployment, delete the `[]` in the line above
  ## and uncomment this example block
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn driver
  nodeSelector: {}
  ## If you want to set a node selector for the Longhorn Driver Deployer Deployment, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"
longhornUI:
  # -- Replica count for longhorn ui
  replicas: 2
  # -- Priority class for longhorn ui
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn UI
  tolerations: []
  ## If you want to set tolerations for the Longhorn UI Deployment, delete the `[]` in the line above
  ## and uncomment this example block
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn UI
  nodeSelector: {}
  ## If you want to set a node selector for the Longhorn UI Deployment, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"
ingress:
  # -- Set to true to enable ingress record generation
  enabled: false

  # -- Add ingressClassName to the Ingress.
  # Can replace the kubernetes.io/ingress.class annotation on v1.18+
  ingressClassName: ~

  # -- Layer 7 Load Balancer hostname
  host: sslip.io

  # -- Set this to true in order to enable TLS on the ingress record
  tls: false

  # -- Enable this so that the backend service is connected at port 443
  secureBackends: false

  # -- If TLS is set to true, you must declare what secret will store the key/certificate for TLS
  tlsSecret: longhorn.local-tls

  # -- If ingress is enabled you can set the default ingress path;
  # you can then access the UI by using the full path {{host}}+{{path}}
  path: /

  ## If you're using kube-lego, you will want to add:
  ## kubernetes.io/tls-acme: true
  ##
@@ -436,12 +83,10 @@ ingress:
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
  ##
  ## If tls is set to true, the annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  # -- Ingress annotations done as key:value pairs
  annotations:
  #  kubernetes.io/ingress.class: nginx
  #  kubernetes.io/tls-acme: true

  # -- If you're providing your own certificates, please use this to add the certificates as secrets
  secrets:
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
@@ -455,26 +100,3 @@ ingress:
  # - name: longhorn.local-tls
  #   key:
  #   certificate:
# -- For Kubernetes < v1.25, if your cluster enables the Pod Security Policy admission controller,
# set this to `true` to ship longhorn-psp, which allows privileged Longhorn pods to start
enablePSP: false

# -- Annotations to add to the Longhorn Manager DaemonSet pods. Optional.
annotations: {}

serviceAccount:
  # -- Annotations to add to the service account
  annotations: {}

## openshift settings
openshift:
  # -- Enable when using OpenShift
  enabled: false
  ui:
    # -- UI route in the OpenShift environment
    route: "longhorn-ui"
    # -- UI port in the OpenShift environment
    port: 443
    # -- UI proxy in the OpenShift environment
    proxy: 8443
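Taken together, the file above is consumed as Helm values. A minimal sketch of wiring an override file into an upgrade — the release and namespace names are the conventional ones for this chart, and the `helm` invocation is left commented out since it needs a live cluster:

```shell
# Write a small override file, then feed it to helm with -f.
cat > /tmp/longhorn-overrides.yaml <<'EOF'
longhornUI:
  replicas: 1                      # illustrative: shrink the UI deployment
ingress:
  enabled: true
  host: longhorn.example.com       # placeholder hostname
EOF
# helm upgrade longhorn longhorn/longhorn -n longhorn-system -f /tmp/longhorn-overrides.yaml
grep 'host:' /tmp/longhorn-overrides.yaml
```

Only the keys in the override file change; everything else keeps the values shown above.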
@@ -1,48 +0,0 @@
# same secret for longhorn-system namespace
apiVersion: v1
kind: Secret
metadata:
  name: azblob-secret
  namespace: longhorn-system
type: Opaque
data:
  AZBLOB_ACCOUNT_NAME: ZGV2c3RvcmVhY2NvdW50MQ==
  AZBLOB_ACCOUNT_KEY: RWJ5OHZkTTAyeE5PY3FGbHFVd0pQTGxtRXRsQ0RYSjFPVXpGVDUwdVNSWjZJRnN1RnEyVVZFckN6NEk2dHEvSzFTWkZQVE90ci9LQkhCZWtzb0dNR3c9PQ==
  AZBLOB_ENDPOINT: aHR0cDovL2F6YmxvYi1zZXJ2aWNlLmRlZmF1bHQ6MTAwMDAv
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-test-azblob
  namespace: default
  labels:
    app: longhorn-test-azblob
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-test-azblob
  template:
    metadata:
      labels:
        app: longhorn-test-azblob
    spec:
      containers:
      - name: azurite
        image: mcr.microsoft.com/azure-storage/azurite:3.23.0
        ports:
        - containerPort: 10000
---
apiVersion: v1
kind: Service
metadata:
  name: azblob-service
  namespace: default
spec:
  selector:
    app: longhorn-test-azblob
  ports:
  - port: 10000
    targetPort: 10000
    protocol: TCP
  sessionAffinity: ClientIP
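The base64 payloads in `azblob-secret` decode to Azurite's well-known development-storage account and the in-cluster Azurite service endpoint, which is why this manifest is only suitable for test clusters. A quick check:

```shell
# Decode the secret's values. These are Azurite's published dev-account
# name and the test service endpoint, not real Azure credentials.
echo -n 'ZGV2c3RvcmVhY2NvdW50MQ==' | base64 -d; echo                               # devstoreaccount1
echo -n 'aHR0cDovL2F6YmxvYi1zZXJ2aWNlLmRlZmF1bHQ6MTAwMDAv' | base64 -d; echo      # http://azblob-service.default:10000/
```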
@ -1,87 +0,0 @@
|
|||||||
apiVersion: v1
|
|
||||||
kind: Secret
|
|
||||||
metadata:
|
|
||||||
name: cifs-secret
|
|
||||||
namespace: longhorn-system
|
|
||||||
type: Opaque
|
|
||||||
data:
|
|
||||||
CIFS_USERNAME: bG9uZ2hvcm4tY2lmcy11c2VybmFtZQ== # longhorn-cifs-username
|
|
||||||
CIFS_PASSWORD: bG9uZ2hvcm4tY2lmcy1wYXNzd29yZA== # longhorn-cifs-password
|
|
||||||
---
|
|
||||||
apiVersion: v1
|
|
||||||
kind: Secret
|
|
||||||
metadata:
|
|
||||||
name: cifs-secret
|
|
||||||
namespace: default
|
|
||||||
type: Opaque
|
|
||||||
data:
|
|
||||||
CIFS_USERNAME: bG9uZ2hvcm4tY2lmcy11c2VybmFtZQ== # longhorn-cifs-username
|
|
||||||
CIFS_PASSWORD: bG9uZ2hvcm4tY2lmcy1wYXNzd29yZA== # longhorn-cifs-password
|
|
||||||
---
|
|
||||||
apiVersion: apps/v1
|
|
||||||
kind: Deployment
|
|
||||||
metadata:
|
|
||||||
name: longhorn-test-cifs
|
|
||||||
namespace: default
|
|
||||||
labels:
|
|
||||||
app: longhorn-test-cifs
|
|
||||||
spec:
|
|
||||||
replicas: 1
|
|
||||||
selector:
|
|
||||||
matchLabels:
|
|
||||||
app: longhorn-test-cifs
|
|
||||||
template:
|
|
||||||
metadata:
|
|
||||||
labels:
|
|
||||||
app: longhorn-test-cifs
|
|
||||||
spec:
|
|
||||||
volumes:
|
|
||||||
- name: cifs-volume
|
|
||||||
emptyDir: {}
|
|
||||||
containers:
|
|
||||||
- name: longhorn-test-cifs-container
|
|
||||||
image: derekbit/samba:latest
|
|
||||||
ports:
|
|
||||||
- containerPort: 139
|
|
||||||
- containerPort: 445
|
|
||||||
imagePullPolicy: Always
|
|
||||||
env:
|
|
||||||
- name: EXPORT_PATH
|
|
||||||
value: /opt/backupstore
|
|
||||||
- name: CIFS_DISK_IMAGE_SIZE_MB
|
|
||||||
value: "4096"
|
|
||||||
- name: CIFS_USERNAME
|
|
||||||
valueFrom:
|
|
||||||
secretKeyRef:
|
|
||||||
name: cifs-secret
|
|
||||||
key: CIFS_USERNAME
|
|
||||||
- name: CIFS_PASSWORD
|
|
||||||
valueFrom:
|
|
||||||
secretKeyRef:
|
|
||||||
name: cifs-secret
|
|
||||||
key: CIFS_PASSWORD
|
|
||||||
securityContext:
|
|
||||||
privileged: true
|
|
||||||
capabilities:
|
|
||||||
add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
|
|
||||||
volumeMounts:
|
|
||||||
- name: cifs-volume
|
|
||||||
mountPath: "/opt/backupstore"
|
|
||||||
args: ["-u", "$(CIFS_USERNAME);$(CIFS_PASSWORD)", "-s", "backupstore;$(EXPORT_PATH);yes;no;no;all;none"]
|
|
||||||
---
|
|
||||||
kind: Service
|
|
||||||
apiVersion: v1
|
|
||||||
metadata:
|
|
||||||
name: longhorn-test-cifs-svc
|
|
||||||
namespace: default
|
|
||||||
spec:
|
|
||||||
selector:
|
|
||||||
app: longhorn-test-cifs
|
|
||||||
clusterIP: None
|
|
||||||
ports:
|
|
||||||
- name: netbios-port
|
|
||||||
port: 139
|
|
||||||
targetPort: 139
|
|
||||||
- name: microsoft-port
|
|
||||||
port: 445
|
|
||||||
targetPort: 445
|
|
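The CIFS credentials are stored base64-encoded, as Kubernetes `Secret.data` requires; the inline comments give the plaintext. To produce (or verify) such a value yourself:

```shell
# Encode a plaintext credential the way Secret.data expects.
# -n matters: a trailing newline would otherwise leak into the encoded value.
echo -n 'longhorn-cifs-username' | base64    # bG9uZ2hvcm4tY2lmcy11c2VybmFtZQ==
```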
@@ -7,9 +7,7 @@ type: Opaque
data:
  AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
  AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
  AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA== # https://minio-service.default:9000
  AWS_CERT: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMRENDQWhTZ0F3SUJBZ0lSQU1kbzQycGhUZXlrMTcvYkxyWjVZRHN3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQ2hNUFRHOXVaMmh2Y200Z0xTQlVaWE4wTUNBWERUSXdNRFF5TnpJek1EQXhNVm9ZRHpJeApNakF3TkRBek1qTXdNREV4V2pBYU1SZ3dGZ1lEVlFRS0V3OU1iMjVuYUc5eWJpQXRJRlJsYzNRd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEWHpVdXJnUFpEZ3pUM0RZdWFlYmdld3Fvd2RlQUQKODRWWWF6ZlN1USs3K21Oa2lpUVBvelVVMmZvUWFGL1BxekJiUW1lZ29hT3l5NVhqM1VFeG1GcmV0eDBaRjVOVgpKTi85ZWFJNWRXRk9teHhpMElPUGI2T0RpbE1qcXVEbUVPSXljdjRTaCsvSWo5Zk1nS0tXUDdJZGxDNUJPeThkCncwOVdkckxxaE9WY3BKamNxYjN6K3hISHd5Q05YeGhoRm9tb2xQVnpJbnlUUEJTZkRuSDBuS0lHUXl2bGhCMGsKVHBHSzYxc2prZnFTK3hpNTlJeHVrbHZIRXNQcjFXblRzYU9oaVh6N3lQSlorcTNBMWZoVzBVa1JaRFlnWnNFbQovZ05KM3JwOFhZdURna2kzZ0UrOElXQWRBWHExeWhqRDdSSkI4VFNJYTV0SGpKUUtqZ0NlSG5HekFnTUJBQUdqCmF6QnBNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTURFR0ExVWRFUVFxTUNpQ0NXeHZZMkZzYUc5emRJSVZiV2x1YVc4dGMyVnlkbWxqWlM1awpaV1poZFd4MGh3Ui9BQUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbUZMMzlNSHVZMzFhMTFEajRwMjVjCnFQRUM0RHZJUWozTk9kU0dWMmQrZjZzZ3pGejFXTDhWcnF2QjFCMVM2cjRKYjJQRXVJQkQ4NFlwVXJIT1JNU2MKd3ViTEppSEtEa0Jmb2U5QWI1cC9VakpyS0tuajM0RGx2c1cvR3AwWTZYc1BWaVdpVWorb1JLbUdWSTI0Q0JIdgpnK0JtVzNDeU5RR1RLajk0eE02czNBV2xHRW95YXFXUGU1eHllVWUzZjFBWkY5N3RDaklKUmVWbENtaENGK0JtCmFUY1RSUWN3cVdvQ3AwYmJZcHlERFlwUmxxOEdQbElFOW8yWjZBc05mTHJVcGFtZ3FYMmtYa2gxa3lzSlEralAKelFadHJSMG1tdHVyM0RuRW0yYmk0TktIQVFIcFc5TXUxNkdRakUxTmJYcVF0VEI4OGpLNzZjdEg5MzRDYWw2VgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
  AWS_CERT_KEY: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFh6VXVyZ1BaRGd6VDMKRFl1YWViZ2V3cW93ZGVBRDg0VllhemZTdVErNyttTmtpaVFQb3pVVTJmb1FhRi9QcXpCYlFtZWdvYU95eTVYagozVUV4bUZyZXR4MFpGNU5WSk4vOWVhSTVkV0ZPbXh4aTBJT1BiNk9EaWxNanF1RG1FT0l5Y3Y0U2grL0lqOWZNCmdLS1dQN0lkbEM1Qk95OGR3MDlXZHJMcWhPVmNwSmpjcWIzeit4SEh3eUNOWHhoaEZvbW9sUFZ6SW55VFBCU2YKRG5IMG5LSUdReXZsaEIwa1RwR0s2MXNqa2ZxUyt4aTU5SXh1a2x2SEVzUHIxV25Uc2FPaGlYejd5UEpaK3EzQQoxZmhXMFVrUlpEWWdac0VtL2dOSjNycDhYWXVEZ2tpM2dFKzhJV0FkQVhxMXloakQ3UkpCOFRTSWE1dEhqSlFLCmpnQ2VIbkd6QWdNQkFBRUNnZ0VBZlVyQ1hrYTN0Q2JmZjNpcnp2cFFmZnVEbURNMzV0TmlYaDJTQVpSVW9FMFYKbSsvZ1UvdnIrN2s2eUgvdzhMOXhpZXFhQTljVkZkL0JuTlIrMzI2WGc2dEpCNko2ZGZxODJZdmZOZ0VDaUFMaQpqalNGemFlQmhnT3ZsWXZHbTR5OTU1Q0FGdjQ1cDNac1VsMTFDRXJlL1BGbGtaWHRHeGlrWFl6NC85UTgzblhZCnM2eDdPYTgyUjdwT2lraWh3Q0FvVTU3Rjc4ZWFKOG1xTmkwRlF2bHlxSk9QMTFCbVp4dm54ZU11S2poQjlPTnAKTFNwMWpzZXk5bDZNR2pVbjBGTG53RHZkVWRiK0ZlUEkxTjdWYUNBd3hJK3JHa3JTWkhnekhWWE92VUpON2t2QQpqNUZPNW9uNGgvK3hXbkYzM3lxZ0VvWWZ0MFFJL2pXS2NOV1d1a2pCd1FLQmdRRGVFNlJGRUpsT2Q1aVcxeW1qCm45RENnczVFbXFtRXN3WU95bkN3U2RhK1lNNnZVYmlac1k4WW9wMVRmVWN4cUh2NkFQWGpVd2NBUG1QVE9KRW8KMlJtS0xTYkhsTnc4bFNOMWJsWDBEL3Mzamc1R3VlVW9nbW5TVnhMa0h1OFhKR0o3VzFReEUzZG9IUHRrcTNpagpoa09QTnJpZFM0UmxqNTJwYkhscjUvQzRjUUtCZ1FENHhFYmpuck1heFV2b0xxVTRvT2xiOVc5UytSUllTc0cxCmxJUmgzNzZTV0ZuTTlSdGoyMTI0M1hkaE4zUFBtSTNNeiswYjdyMnZSUi9LMS9Cc1JUQnlrTi9kbkVuNVUxQkEKYm90cGZIS1Jvc1FUR1hIQkEvM0JrNC9qOWplU3RmVXgzZ2x3eUI0L2hORy9KM1ZVV2FXeURTRm5qZFEvcGJsRwp6VWlsSVBmK1l3S0JnUUNwMkdYYmVJMTN5TnBJQ3psS2JqRlFncEJWUWVDQ29CVHkvUHRncUtoM3BEeVBNN1kyCnZla09VMWgyQVN1UkhDWHRtQXgzRndvVXNxTFFhY1FEZEw4bXdjK1Y5eERWdU02TXdwMDBjNENVQmE1L2d5OXoKWXdLaUgzeFFRaVJrRTZ6S1laZ3JqSkxYYXNzT1BHS2cxbEFYV1NlckRaV3R3MEEyMHNLdXQ0NlEwUUtCZ0hGZQpxZHZVR0ZXcjhvTDJ0dzlPcmVyZHVJVTh4RnZVZmVFdHRRTVJ2N3pjRE5qT0gxUnJ4Wk9aUW0ySW92dkp6MTIyCnFKMWhPUXJtV3EzTHFXTCtTU3o4L3pqMG4vWERWVUIzNElzTFR2ODJDVnVXN2ZPRHlTSnVDRlpnZ0VVWkxZd3oKWDJRSm4xZGRSV1Z6S3hKczVJbDNXSERqL3dXZWxnaE
JSOGtSZEZOM0FvR0FJNldDdjJQQ1lUS1ZZNjAwOFYwbgpyTDQ3YTlPanZ0Yy81S2ZxSjFpMkpKTUgyQi9jbU1WRSs4M2dpODFIU1FqMWErNnBjektmQVppZWcwRk9nL015ClB6VlZRYmpKTnY0QzM5KzdxSDg1WGdZTXZhcTJ0aDFEZWUvQ3NsMlM4QlV0cW5mc0VuMUYwcWhlWUJZb2RibHAKV3RUaE5oRi9oRVhzbkJROURyWkJKT1U9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
|
|
||||||
---
|
---
|
||||||
# same secret for longhorn-system namespace
|
# same secret for longhorn-system namespace
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
@ -21,48 +19,30 @@ type: Opaque
|
|||||||
data:
|
data:
|
||||||
AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
|
AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
|
||||||
AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
|
AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
|
||||||
AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA== # https://minio-service.default:9000
|
AWS_ENDPOINTS: aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw # http://minio-service.default:9000
|
||||||
AWS_CERT: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMRENDQWhTZ0F3SUJBZ0lSQU1kbzQycGhUZXlrMTcvYkxyWjVZRHN3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQ2hNUFRHOXVaMmh2Y200Z0xTQlVaWE4wTUNBWERUSXdNRFF5TnpJek1EQXhNVm9ZRHpJeApNakF3TkRBek1qTXdNREV4V2pBYU1SZ3dGZ1lEVlFRS0V3OU1iMjVuYUc5eWJpQXRJRlJsYzNRd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEWHpVdXJnUFpEZ3pUM0RZdWFlYmdld3Fvd2RlQUQKODRWWWF6ZlN1USs3K21Oa2lpUVBvelVVMmZvUWFGL1BxekJiUW1lZ29hT3l5NVhqM1VFeG1GcmV0eDBaRjVOVgpKTi85ZWFJNWRXRk9teHhpMElPUGI2T0RpbE1qcXVEbUVPSXljdjRTaCsvSWo5Zk1nS0tXUDdJZGxDNUJPeThkCncwOVdkckxxaE9WY3BKamNxYjN6K3hISHd5Q05YeGhoRm9tb2xQVnpJbnlUUEJTZkRuSDBuS0lHUXl2bGhCMGsKVHBHSzYxc2prZnFTK3hpNTlJeHVrbHZIRXNQcjFXblRzYU9oaVh6N3lQSlorcTNBMWZoVzBVa1JaRFlnWnNFbQovZ05KM3JwOFhZdURna2kzZ0UrOElXQWRBWHExeWhqRDdSSkI4VFNJYTV0SGpKUUtqZ0NlSG5HekFnTUJBQUdqCmF6QnBNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTURFR0ExVWRFUVFxTUNpQ0NXeHZZMkZzYUc5emRJSVZiV2x1YVc4dGMyVnlkbWxqWlM1awpaV1poZFd4MGh3Ui9BQUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbUZMMzlNSHVZMzFhMTFEajRwMjVjCnFQRUM0RHZJUWozTk9kU0dWMmQrZjZzZ3pGejFXTDhWcnF2QjFCMVM2cjRKYjJQRXVJQkQ4NFlwVXJIT1JNU2MKd3ViTEppSEtEa0Jmb2U5QWI1cC9VakpyS0tuajM0RGx2c1cvR3AwWTZYc1BWaVdpVWorb1JLbUdWSTI0Q0JIdgpnK0JtVzNDeU5RR1RLajk0eE02czNBV2xHRW95YXFXUGU1eHllVWUzZjFBWkY5N3RDaklKUmVWbENtaENGK0JtCmFUY1RSUWN3cVdvQ3AwYmJZcHlERFlwUmxxOEdQbElFOW8yWjZBc05mTHJVcGFtZ3FYMmtYa2gxa3lzSlEralAKelFadHJSMG1tdHVyM0RuRW0yYmk0TktIQVFIcFc5TXUxNkdRakUxTmJYcVF0VEI4OGpLNzZjdEg5MzRDYWw2VgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
|
|
||||||
---
|
---
|
||||||
apiVersion: apps/v1
|
apiVersion: v1
|
||||||
kind: Deployment
|
kind: Pod
|
||||||
metadata:
|
metadata:
|
||||||
name: longhorn-test-minio
|
name: longhorn-test-minio
|
||||||
namespace: default
|
namespace: default
|
||||||
labels:
|
labels:
|
||||||
app: longhorn-test-minio
|
app: longhorn-test-minio
|
||||||
spec:
|
|
||||||
replicas: 1
|
|
||||||
selector:
|
|
||||||
matchLabels:
|
|
||||||
app: longhorn-test-minio
|
|
||||||
template:
|
|
||||||
metadata:
|
|
||||||
labels:
|
|
||||||
app: longhorn-test-minio
|
|
||||||
spec:
|
spec:
|
||||||
volumes:
|
volumes:
|
||||||
- name: minio-volume
|
- name: minio-volume
|
||||||
emptyDir: {}
|
emptyDir: {}
|
||||||
- name: minio-certificates
|
|
||||||
secret:
|
|
||||||
secretName: minio-secret
|
|
||||||
items:
|
|
||||||
- key: AWS_CERT
|
|
||||||
path: public.crt
|
|
||||||
- key: AWS_CERT_KEY
|
|
||||||
path: private.key
|
|
||||||
containers:
|
containers:
|
||||||
- name: minio
|
- name: minio
|
||||||
image: minio/minio:RELEASE.2022-02-01T18-00-14Z
|
image: minio/minio
|
||||||
command: ["sh", "-c", "mkdir -p /storage/backupbucket && mkdir -p /root/.minio/certs && ln -s /root/certs/private.key /root/.minio/certs/private.key && ln -s /root/certs/public.crt /root/.minio/certs/public.crt && exec minio server /storage"]
|
command: ["sh", "-c", "mkdir -p /storage/backupbucket && exec /usr/bin/minio server /storage"]
|
||||||
env:
|
env:
|
||||||
- name: MINIO_ROOT_USER
|
- name: MINIO_ACCESS_KEY
|
||||||
valueFrom:
|
valueFrom:
|
||||||
secretKeyRef:
|
secretKeyRef:
|
||||||
name: minio-secret
|
name: minio-secret
|
||||||
key: AWS_ACCESS_KEY_ID
|
key: AWS_ACCESS_KEY_ID
|
||||||
- name: MINIO_ROOT_PASSWORD
|
- name: MINIO_SECRET_KEY
|
||||||
valueFrom:
|
valueFrom:
|
||||||
secretKeyRef:
|
secretKeyRef:
|
||||||
name: minio-secret
|
name: minio-secret
|
||||||
@ -72,9 +52,6 @@ spec:
|
|||||||
volumeMounts:
|
volumeMounts:
|
||||||
- name: minio-volume
|
- name: minio-volume
|
||||||
mountPath: "/storage"
|
mountPath: "/storage"
|
||||||
- name: minio-certificates
|
|
||||||
mountPath: "/root/certs"
|
|
||||||
readOnly: true
|
|
||||||
---
|
---
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
kind: Service
|
kind: Service
|
||||||
|
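As an aside on reading the diff above: the secret values are base64-encoded, and the inline comments give their decoded form. Decoding the two `AWS_ENDPOINTS` values directly shows the actual change between the branches (master serves MinIO over https, v0.8.0 over http); a small sketch:

```python
import base64

# AWS_ENDPOINTS values copied from the two sides of the diff above.
master_endpoint = "aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA=="
v080_endpoint = "aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw"

# Decode to the plaintext endpoints the comments claim.
print(base64.b64decode(master_endpoint).decode())  # https://minio-service.default:9000
print(base64.b64decode(v080_endpoint).decode())    # http://minio-service.default:9000
```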
@@ -1,25 +1,17 @@
-apiVersion: apps/v1
+apiVersion: v1
-kind: Deployment
+kind: Pod
 metadata:
   name: longhorn-test-nfs
   namespace: default
   labels:
     app: longhorn-test-nfs
-spec:
-  selector:
-    matchLabels:
-      app: longhorn-test-nfs
-  template:
-    metadata:
-      labels:
-        app: longhorn-test-nfs
     spec:
       volumes:
       - name: nfs-volume
         emptyDir: {}
       containers:
       - name: longhorn-test-nfs-container
-        image: longhornio/nfs-ganesha:latest
+    image: janeczku/nfs-ganesha:latest
         imagePullPolicy: Always
         env:
         - name: EXPORT_ID
@@ -28,8 +20,6 @@ spec:
           value: /opt/backupstore
         - name: PSEUDO_PATH
           value: /opt/backupstore
-        - name: NFS_DISK_IMAGE_SIZE_MB
-          value: "4096"
         command: ["bash", "-c", "chmod 700 /opt/backupstore && /opt/start_nfs.sh | tee /var/log/ganesha.log"]
         securityContext:
           privileged: true
@@ -43,7 +33,6 @@ spec:
           command: ["bash", "-c", "grep \"No export entries found\" /var/log/ganesha.log > /dev/null 2>&1 ; [ $? -ne 0 ]"]
           initialDelaySeconds: 5
           periodSeconds: 5
-          timeoutSeconds: 4
 ---
 kind: Service
 apiVersion: v1
@@ -1,13 +0,0 @@
-longhornio/csi-attacher:v4.2.0
-longhornio/csi-provisioner:v3.4.1
-longhornio/csi-resizer:v1.7.0
-longhornio/csi-snapshotter:v6.2.1
-longhornio/csi-node-driver-registrar:v2.7.0
-longhornio/livenessprobe:v2.9.0
-longhornio/backing-image-manager:master-head
-longhornio/longhorn-engine:master-head
-longhornio/longhorn-instance-manager:master-head
-longhornio/longhorn-manager:master-head
-longhornio/longhorn-share-manager:master-head
-longhornio/longhorn-ui:master-head
-longhornio/support-bundle-kit:v0.0.27
4339 deploy/longhorn.yaml
File diff suppressed because it is too large
@@ -1,61 +0,0 @@
-apiVersion: policy/v1beta1
-kind: PodSecurityPolicy
-metadata:
-  name: longhorn-psp
-spec:
-  privileged: true
-  allowPrivilegeEscalation: true
-  requiredDropCapabilities:
-  - NET_RAW
-  allowedCapabilities:
-  - SYS_ADMIN
-  hostNetwork: false
-  hostIPC: false
-  hostPID: true
-  runAsUser:
-    rule: RunAsAny
-  seLinux:
-    rule: RunAsAny
-  fsGroup:
-    rule: RunAsAny
-  supplementalGroups:
-    rule: RunAsAny
-  volumes:
-  - configMap
-  - downwardAPI
-  - emptyDir
-  - secret
-  - projected
-  - hostPath
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: longhorn-psp-role
-  namespace: longhorn-system
-rules:
-- apiGroups:
-  - policy
-  resources:
-  - podsecuritypolicies
-  verbs:
-  - use
-  resourceNames:
-  - longhorn-psp
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: longhorn-psp-binding
-  namespace: longhorn-system
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: longhorn-psp-role
-subjects:
-- kind: ServiceAccount
-  name: longhorn-service-account
-  namespace: longhorn-system
-- kind: ServiceAccount
-  name: default
-  namespace: longhorn-system
@@ -1,36 +0,0 @@
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: longhorn-cifs-installation
-  labels:
-    app: longhorn-cifs-installation
-  annotations:
-    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y cifs-utils; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y cifs-utils; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y cifs-utils; fi && if [ $? -eq 0 ]; then echo "cifs install successfully"; else echo "cifs utilities install failed error code $?"; fi
-spec:
-  selector:
-    matchLabels:
-      app: longhorn-cifs-installation
-  template:
-    metadata:
-      labels:
-        app: longhorn-cifs-installation
-    spec:
-      hostNetwork: true
-      hostPID: true
-      initContainers:
-      - name: cifs-installation
-        command:
-        - nsenter
-        - --mount=/proc/1/ns/mnt
-        - --
-        - bash
-        - -c
-        - *cmd
-        image: alpine:3.12
-        securityContext:
-          privileged: true
-      containers:
-      - name: sleep
-        image: registry.k8s.io/pause:3.1
-  updateStrategy:
-    type: RollingUpdate
@@ -1,36 +0,0 @@
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: longhorn-iscsi-installation
-  labels:
-    app: longhorn-iscsi-installation
-  annotations:
-    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y iscsi-initiator-utils && echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; fi && if [ $? -eq 0 ]; then echo "iscsi install successfully"; else echo "iscsi install failed error code $?"; fi
-spec:
-  selector:
-    matchLabels:
-      app: longhorn-iscsi-installation
-  template:
-    metadata:
-      labels:
-        app: longhorn-iscsi-installation
-    spec:
-      hostNetwork: true
-      hostPID: true
-      initContainers:
-      - name: iscsi-installation
-        command:
-        - nsenter
-        - --mount=/proc/1/ns/mnt
-        - --
-        - bash
-        - -c
-        - *cmd
-        image: alpine:3.17
-        securityContext:
-          privileged: true
-      containers:
-      - name: sleep
-        image: registry.k8s.io/pause:3.1
-  updateStrategy:
-    type: RollingUpdate
@@ -1,35 +0,0 @@
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: longhorn-iscsi-selinux-workaround
-  labels:
-    app: longhorn-iscsi-selinux-workaround
-  annotations:
-    command: &cmd if ! rpm -q policycoreutils > /dev/null 2>&1; then echo "failed to apply workaround; only applicable in Fedora based distros with SELinux enabled"; exit; elif cd /tmp && echo '(allow iscsid_t self (capability (dac_override)))' > local_longhorn.cil && semodule -vi local_longhorn.cil && rm -f local_longhorn.cil; then echo "applied workaround successfully"; else echo "failed to apply workaround; error code $?"; fi
-spec:
-  selector:
-    matchLabels:
-      app: longhorn-iscsi-selinux-workaround
-  template:
-    metadata:
-      labels:
-        app: longhorn-iscsi-selinux-workaround
-    spec:
-      hostPID: true
-      initContainers:
-      - name: iscsi-selinux-workaround
-        command:
-        - nsenter
-        - --mount=/proc/1/ns/mnt
-        - --
-        - bash
-        - -c
-        - *cmd
-        image: alpine:3.17
-        securityContext:
-          privileged: true
-      containers:
-      - name: sleep
-        image: registry.k8s.io/pause:3.1
-  updateStrategy:
-    type: RollingUpdate
@@ -1,36 +0,0 @@
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: longhorn-nfs-installation
-  labels:
-    app: longhorn-nfs-installation
-  annotations:
-    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nfs-common && sudo modprobe nfs; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nfs-client && sudo modprobe nfs; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nfs-utils && sudo modprobe nfs; fi && if [ $? -eq 0 ]; then echo "nfs install successfully"; else echo "nfs install failed error code $?"; fi
-spec:
-  selector:
-    matchLabels:
-      app: longhorn-nfs-installation
-  template:
-    metadata:
-      labels:
-        app: longhorn-nfs-installation
-    spec:
-      hostNetwork: true
-      hostPID: true
-      initContainers:
-      - name: nfs-installation
-        command:
-        - nsenter
-        - --mount=/proc/1/ns/mnt
-        - --
-        - bash
-        - -c
-        - *cmd
-        image: alpine:3.12
-        securityContext:
-          privileged: true
-      containers:
-      - name: sleep
-        image: registry.k8s.io/pause:3.1
-  updateStrategy:
-    type: RollingUpdate
@@ -1,36 +0,0 @@
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: longhorn-nvme-cli-installation
-  labels:
-    app: longhorn-nvme-cli-installation
-  annotations:
-    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nvme-cli && sudo modprobe nvme-tcp; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nvme-cli && sudo modprobe nvme-tcp; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nvme-cli && sudo modprobe nvme-tcp; fi && if [ $? -eq 0 ]; then echo "nvme-cli install successfully"; else echo "nvme-cli install failed error code $?"; fi
-spec:
-  selector:
-    matchLabels:
-      app: longhorn-nvme-cli-installation
-  template:
-    metadata:
-      labels:
-        app: longhorn-nvme-cli-installation
-    spec:
-      hostNetwork: true
-      hostPID: true
-      initContainers:
-      - name: nvme-cli-installation
-        command:
-        - nsenter
-        - --mount=/proc/1/ns/mnt
-        - --
-        - bash
-        - -c
-        - *cmd
-        image: alpine:3.12
-        securityContext:
-          privileged: true
-      containers:
-      - name: sleep
-        image: registry.k8s.io/pause:3.1
-  updateStrategy:
-    type: RollingUpdate
@@ -1,47 +0,0 @@
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: longhorn-spdk-setup
-  labels:
-    app: longhorn-spdk-setup
-  annotations:
-    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y git; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y git; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y git; fi && if [ $? -eq 0 ]; then echo "git install successfully"; else echo "git install failed error code $?"; fi && rm -rf ${SPDK_DIR}; git clone -b longhorn https://github.com/longhorn/spdk.git ${SPDK_DIR} && bash ${SPDK_DIR}/scripts/setup.sh ${SPDK_OPTION}; if [ $? -eq 0 ]; then echo "vm.nr_hugepages=$((HUGEMEM/2))" >> /etc/sysctl.conf; echo "SPDK environment is configured successfully"; else echo "Failed to configure SPDK environment error code $?"; fi; rm -rf ${SPDK_DIR}
-spec:
-  selector:
-    matchLabels:
-      app: longhorn-spdk-setup
-  template:
-    metadata:
-      labels:
-        app: longhorn-spdk-setup
-    spec:
-      hostNetwork: true
-      hostPID: true
-      initContainers:
-      - name: longhorn-spdk-setup
-        command:
-        - nsenter
-        - --mount=/proc/1/ns/mnt
-        - --
-        - bash
-        - -c
-        - *cmd
-        image: alpine:3.12
-        env:
-        - name: SPDK_DIR
-          value: "/tmp/spdk"
-        - name: SPDK_OPTION
-          value: ""
-        - name: HUGEMEM
-          value: "1024"
-        - name: PCI_ALLOWED
-          value: "none"
-        - name: DRIVER_OVERRIDE
-          value: "uio_pci_generic"
-        securityContext:
-          privileged: true
-      containers:
-      - name: sleep
-        image: registry.k8s.io/pause:3.1
-  updateStrategy:
-    type: RollingUpdate
@@ -1,7 +0,0 @@
-# Upgrade Responder Helm Chart
-
-This directory contains the helm values for the Longhorn upgrade responder server.
-The values are in the file `./chart-values.yaml`.
-When you update the content of `./chart-values.yaml`, automation pipeline will update the Longhorn upgrade responder.
-Information about the source chart is in `chart.yaml`.
-See [dev/upgrade-responder](../../dev/upgrade-responder/README.md) for manual deployment steps.
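The chart values in the next file embed a `responseConfig` JSON whose `tags` field is what an upgrade check resolves against (e.g. which release is currently "latest" or "stable"). A minimal sketch of how a consumer could read that structure; the `versions_with_tag` helper is illustrative, not part of the repo:

```python
import json

# responseConfig payload as embedded in chart-values.yaml below.
response_config = json.loads("""
{
  "versions": [
    {"name": "v1.3.3", "releaseDate": "2023-04-19T00:00:00Z", "tags": ["stable"]},
    {"name": "v1.4.3", "releaseDate": "2023-07-14T00:00:00Z", "tags": ["latest", "stable"]},
    {"name": "v1.5.1", "releaseDate": "2023-07-19T00:00:00Z", "tags": ["latest"]}
  ]
}
""")

def versions_with_tag(config: dict, tag: str) -> list:
    """Return version names carrying the given tag, newest release date first."""
    tagged = [v for v in config["versions"] if tag in v["tags"]]
    tagged.sort(key=lambda v: v["releaseDate"], reverse=True)
    return [v["name"] for v in tagged]

print(versions_with_tag(response_config, "latest"))  # ['v1.5.1', 'v1.4.3']
print(versions_with_tag(response_config, "stable"))  # ['v1.4.3', 'v1.3.3']
```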
@@ -1,372 +0,0 @@
-# Specify the name of the application that is using this Upgrade Responder server
-# This will be used to create a database named <application-name>_upgrade_responder
-# in the InfluxDB to store all data for this Upgrade Responder
-# The name must be in snake case format
-applicationName: longhorn
-
-image:
-  repository: longhornio/upgrade-responder
-  tag: longhorn-head
-  pullPolicy: Always
-
-secret:
-  name: upgrade-responder-secret
-  # Set this to false if you don't want to manage these secrets with helm
-  managed: false
-
-resources:
-  limits:
-    cpu: 400m
-    memory: 512Mi
-  requests:
-    cpu: 200m
-    memory: 256Mi
-
-# This configmap contains information about the latest release
-# of the application that is using this Upgrade Responder
-configMap:
-  responseConfig: |-
-    {
-      "versions": [
-        { "name": "v1.3.3", "releaseDate": "2023-04-19T00:00:00Z", "tags": ["stable"] },
-        { "name": "v1.4.3", "releaseDate": "2023-07-14T00:00:00Z", "tags": ["latest", "stable"] },
-        { "name": "v1.5.1", "releaseDate": "2023-07-19T00:00:00Z", "tags": ["latest"] }
-      ]
-    }
-  requestSchema: |-
-    {
-      "appVersionSchema": { "dataType": "string", "maxLen": 200 },
-      "extraTagInfoSchema": {
-        "hostKernelRelease": { "dataType": "string", "maxLen": 200 },
-        "hostOsDistro": { "dataType": "string", "maxLen": 200 },
-        "kubernetesNodeProvider": { "dataType": "string", "maxLen": 200 },
-        "kubernetesVersion": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingAllowRecurringJobWhileVolumeDetached": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingAllowVolumeCreationWithDegradedAvailability": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingAutoCleanupSystemGeneratedSnapshot": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingAutoDeletePodWhenVolumeDetachedUnexpectedly": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingAutoSalvage": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingBackupCompressionMethod": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingBackupTarget": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingCrdApiVersion": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingCreateDefaultDiskLabeledNodes": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingDefaultDataLocality": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingDisableRevisionCounter": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingDisableSchedulingOnCordonedNode": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingFastReplicaRebuildEnabled": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingKubernetesClusterAutoscalerEnabled": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingNodeDownPodDeletionPolicy": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingNodeDrainPolicy": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingOfflineReplicaRebuilding": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingOrphanAutoDeletion": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingPriorityClass": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingRegistrySecret": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingRemoveSnapshotsDuringFilesystemTrim": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingReplicaAutoBalance": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingReplicaSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingReplicaZoneSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingReplicaDiskSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingRestoreVolumeRecurringJobs": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingSnapshotDataIntegrity": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingSnapshotDataIntegrityCronjob": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingStorageNetwork": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingSystemManagedComponentsNodeSelector": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingSystemManagedPodsImagePullPolicy": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingTaintToleration": { "dataType": "string", "maxLen": 200 },
-        "longhornSettingV2DataEngine": { "dataType": "string", "maxLen": 200 }
-      },
-      "extraFieldInfoSchema": {
-        "longhornInstanceManagerAverageCpuUsageMilliCores": { "dataType": "float" },
-        "longhornInstanceManagerAverageMemoryUsageBytes": { "dataType": "float" },
-        "longhornManagerAverageCpuUsageMilliCores": { "dataType": "float" },
-        "longhornManagerAverageMemoryUsageBytes": { "dataType": "float" },
-        "longhornNamespaceUid": { "dataType": "string", "maxLen": 200 },
-        "longhornNodeCount": { "dataType": "float" },
-        "longhornNodeDiskHDDCount": { "dataType": "float" },
-        "longhornNodeDiskNVMeCount": { "dataType": "float" },
-        "longhornNodeDiskSSDCount": { "dataType": "float" },
-        "longhornSettingBackingImageCleanupWaitInterval": { "dataType": "float" },
-        "longhornSettingBackingImageRecoveryWaitInterval": { "dataType": "float" },
-        "longhornSettingBackupConcurrentLimit": { "dataType": "float" },
-        "longhornSettingBackupstorePollInterval": { "dataType": "float" },
-        "longhornSettingConcurrentAutomaticEngineUpgradePerNodeLimit": { "dataType": "float" },
-        "longhornSettingConcurrentReplicaRebuildPerNodeLimit": { "dataType": "float" },
-        "longhornSettingConcurrentVolumeBackupRestorePerNodeLimit": { "dataType": "float" },
-        "longhornSettingDefaultReplicaCount": { "dataType": "float" },
-        "longhornSettingEngineReplicaTimeout": { "dataType": "float" },
-        "longhornSettingFailedBackupTtl": { "dataType": "float" },
-        "longhornSettingGuaranteedInstanceManagerCpu": { "dataType": "float" },
-        "longhornSettingRecurringFailedJobsHistoryLimit": { "dataType": "float" },
-        "longhornSettingRecurringSuccessfulJobsHistoryLimit": { "dataType": "float" },
-        "longhornSettingReplicaFileSyncHttpClientTimeout": { "dataType": "float" },
-        "longhornSettingReplicaReplenishmentWaitInterval": { "dataType": "float" },
-        "longhornSettingRestoreConcurrentLimit": { "dataType": "float" },
-        "longhornSettingStorageMinimalAvailablePercentage": { "dataType": "float" },
-        "longhornSettingStorageOverProvisioningPercentage": { "dataType": "float" },
-        "longhornSettingStorageReservedPercentageForDefaultDisk": { "dataType": "float" },
-        "longhornSettingSupportBundleFailedHistoryLimit": { "dataType": "float" },
-        "longhornVolumeAccessModeRwoCount": { "dataType": "float" },
-        "longhornVolumeAccessModeRwxCount": { "dataType": "float" },
-        "longhornVolumeAccessModeUnknownCount": { "dataType": "float" },
-        "longhornVolumeAverageActualSizeBytes": { "dataType": "float" },
-        "longhornVolumeAverageNumberOfReplicas": { "dataType": "float" },
-        "longhornVolumeAverageSizeBytes": { "dataType": "float" },
-        "longhornVolumeAverageSnapshotCount": { "dataType": "float" },
-        "longhornVolumeDataLocalityBestEffortCount": { "dataType": "float" },
-        "longhornVolumeDataLocalityDisabledCount": { "dataType": "float" },
-        "longhornVolumeDataLocalityStrictLocalCount": { "dataType": "float" },
-        "longhornVolumeFrontendBlockdevCount": { "dataType": "float" },
-        "longhornVolumeFrontendIscsiCount": { "dataType": "float" },
-        "longhornVolumeOfflineReplicaRebuildingDisabledCount": { "dataType": "float" },
-        "longhornVolumeOfflineReplicaRebuildingEnabledCount": { "dataType": "float" },
-        "longhornVolumeReplicaAutoBalanceDisabledCount": { "dataType": "float" },
-        "longhornVolumeReplicaSoftAntiAffinityFalseCount": { "dataType": "float" },
-        "longhornVolumeReplicaZoneSoftAntiAffinityTrueCount": { "dataType": "float" },
-        "longhornVolumeReplicaDiskSoftAntiAffinityTrueCount": { "dataType": "float" },
-        "longhornVolumeRestoreVolumeRecurringJobFalseCount": {
|
|
||||||
"dataType": "float"
|
|
||||||
},
|
|
||||||
"longhornVolumeSnapshotDataIntegrityDisabledCount": {
|
|
||||||
"dataType": "float"
|
|
||||||
},
|
|
||||||
"longhornVolumeSnapshotDataIntegrityFastCheckCount": {
|
|
||||||
"dataType": "float"
|
|
||||||
},
|
|
||||||
"longhornVolumeUnmapMarkSnapChainRemovedFalseCount": {
|
|
||||||
"dataType": "float"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
@ -1,5 +0,0 @@
url: https://github.com/longhorn/upgrade-responder.git
commit: 116f807836c29185038cfb005708f0a8d41f4d35
releaseName: longhorn-upgrade-responder
namespace: longhorn-upgrade-responder
12
dev/scale-test/.gitignore
vendored
@ -1,12 +0,0 @@
# ignores all GoLand project folders and files
.idea
*.iml
*.ipr

# ignore output folder
out
tmp
results

# ignore kubeconfig
kubeconfig
@ -1,27 +0,0 @@
## Overview
scale-test is a collection of developer scripts used to scale a cluster to a requested number of volumes
while monitoring the time required to complete these actions.
`sample.sh` can be used to quickly see how long it takes for the requested number of volumes to be up and usable.
`scale-test.py` can be used to create the requested number of statefulsets based on the `statefulset.yaml` template,
as well as retrieve detailed timing information per volume.


### scale-test.py
scale-test.py watches `pod`, `pvc`, and `va` events (ADDED, MODIFIED, DELETED).
Based on that information we can calculate the time of actions for each individual pod.

In addition, scale-test.py can also be used to create a set of statefulset deployment files
based on the `statefulset.yaml` template, with the following variables substituted based on the current sts index:
`@NODE_NAME@` - schedule each sts on a dedicated node
`@STS_NAME@` - also used for the volume-name

Make sure to set the correct CONSTANT values in scale-test.py before running.


### sample.sh
sample.sh can be used to scale to a requested number of volumes based on the existing statefulsets
and the node count of the current cluster.

One can pass the requested number of volumes as well as the node count of the current cluster.
Example for 1000 volumes and 100 nodes: `./sample.sh 1000 100`
This expects there to be a statefulset deployment for each node.
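The per-pod timing calculation described above can be sketched as follows. This is a minimal illustration, not part of scale-test.py itself: the event names and the `record_event`/`attach_duration` helpers are hypothetical, and it assumes we record a wall-clock timestamp whenever a watch event arrives.

```python
import datetime

# Hypothetical sketch: derive a pod's volume-attach duration from recorded
# watch-event timestamps. The real scale-test.py would populate this map
# from the pod/pvc/va event streams.
def record_event(timings, event_type, name, now):
    timings.setdefault(name, {})[event_type] = now

def attach_duration(timings, name):
    # Seconds between the pod being ADDED and its volume reported ATTACHED;
    # None if either event has not been observed yet.
    t = timings.get(name, {})
    if "ADDED" in t and "ATTACHED" in t:
        return (t["ATTACHED"] - t["ADDED"]).total_seconds()
    return None

timings = {}
t0 = datetime.datetime(2023, 5, 11, 0, 0, 0)
record_event(timings, "ADDED", "sts1-0", t0)
record_event(timings, "ATTACHED", "sts1-0", t0 + datetime.timedelta(seconds=42))
print(attach_duration(timings, "sts1-0"))  # 42.0
```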
@ -1,19 +0,0 @@
#!/bin/bash

requested=${1:-0}
node_count=${2:-1}
required_scale=$((requested / node_count))

now=$(date)
ready=$(kubectl get pods -o custom-columns=NAMESPACE:metadata.namespace,POD:metadata.name,PodIP:status.podIP,READY:status.containerStatuses[*].ready | grep -c true)
echo "$ready -- $now - start state"

cmd=$(kubectl scale --replicas="$required_scale" statefulset --all)
echo "$cmd"
while [ "$ready" -ne "$requested" ]; do
  sleep 60
  now=$(date)
  ready=$(kubectl get pods -o custom-columns=NAMESPACE:metadata.namespace,POD:metadata.name,PodIP:status.podIP,READY:status.containerStatuses[*].ready | grep -c true)
  echo "$ready -- $now - delta:"
done
echo "$requested -- $now - done state"
@ -1,124 +0,0 @@
import sys
import asyncio
import logging
from pathlib import Path
from kubernetes import client, config, watch

NAMESPACE = "default"
NODE_PREFIX = "jmoody-work"
NODE_COUNT = 100
TEMPLATE_FILE = "statefulset.yaml"
KUBE_CONFIG = None
KUBE_CONTEXT = None
# KUBE_CONFIG = "kubeconfig"
# KUBE_CONTEXT = "jmoody-test-jmoody-control2"


def create_sts_deployment(count):
    # @NODE_NAME@ - schedule each sts on a dedicated node
    # @STS_NAME@ - also used for the volume-name
    # create 100 stateful-sets
    for i in range(count):
        create_sts_yaml(i + 1)


def create_sts_yaml(index):
    content = Path(TEMPLATE_FILE).read_text()
    content = content.replace("@NODE_NAME@", NODE_PREFIX + str(index))
    content = content.replace("@STS_NAME@", "sts" + str(index))
    file = Path("out/sts" + str(index) + ".yaml")
    file.parent.mkdir(parents=True, exist_ok=True)
    file.write_text(content)


async def watch_pods_async():
    log = logging.getLogger('pod_events')
    log.setLevel(logging.INFO)
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace=NAMESPACE):
        process_pod_event(log, event)
        await asyncio.sleep(0)


def process_pod_event(log, event):
    log.info("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
    if 'ADDED' in event['type']:
        pass
    elif 'DELETED' in event['type']:
        pass
    else:
        pass


async def watch_pvc_async():
    log = logging.getLogger('pvc_events')
    log.setLevel(logging.INFO)
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_persistent_volume_claim, namespace=NAMESPACE):
        process_pvc_event(log, event)
        await asyncio.sleep(0)


def process_pvc_event(log, event):
    log.info("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
    if 'ADDED' in event['type']:
        pass
    elif 'DELETED' in event['type']:
        pass
    else:
        pass


async def watch_va_async():
    log = logging.getLogger('va_events')
    log.setLevel(logging.INFO)
    storage = client.StorageV1Api()
    w = watch.Watch()
    for event in w.stream(storage.list_volume_attachment):
        process_va_event(log, event)
        await asyncio.sleep(0)


def process_va_event(log, event):
    log.info("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
    if 'ADDED' in event['type']:
        pass
    elif 'DELETED' in event['type']:
        pass
    else:
        pass


if __name__ == '__main__':
    # create the sts deployment files
    create_sts_deployment(NODE_COUNT)

    # setup the monitor
    log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    logging.basicConfig(stream=sys.stdout,
                        level=logging.INFO,
                        format=log_format)
    config.load_kube_config(config_file=KUBE_CONFIG,
                            context=KUBE_CONTEXT)
    logging.info("scale-test started")

    # datastructures to keep track of the timings
    # TODO: process events and keep track of the results
    #   results should be per pod/volume
    #   information to keep track: pod index per sts
    #   volume-creation time per pod
    #   volume-attach time per pod
    #   volume-detach time per pod
    pvc_to_va_map = dict()
    pvc_to_pod_map = dict()
    results = dict()

    # start async event_loop
    event_loop = asyncio.get_event_loop()
    event_loop.create_task(watch_pods_async())
    event_loop.create_task(watch_pvc_async())
    event_loop.create_task(watch_va_async())
    event_loop.run_forever()
    logging.info("scale-test-finished")
@ -1,41 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: @STS_NAME@
spec:
  replicas: 0
  serviceName: @STS_NAME@
  selector:
    matchLabels:
      app: @STS_NAME@
  template:
    metadata:
      labels:
        app: @STS_NAME@
    spec:
      nodeName: @NODE_NAME@
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
      containers:
        - name: '@STS_NAME@'
          image: 'busybox:latest'
          command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
          livenessProbe:
            exec:
              command:
                - ls
                - /mnt/@STS_NAME@
            initialDelaySeconds: 5
            periodSeconds: 5
          volumeMounts:
            - name: @STS_NAME@
              mountPath: /mnt/@STS_NAME@
  volumeClaimTemplates:
    - metadata:
        name: @STS_NAME@
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "longhorn"
        resources:
          requests:
            storage: 1Gi
@ -29,10 +29,8 @@ docker push ${private}
 escaped_private=${private//\//\\\/}
 sed -i "s/image\:\ .*\/${project}:.*/image\:\ ${escaped_private}/g" $yaml
 sed -i "s/-\ .*\/${project}:.*/-\ ${escaped_private}/g" $yaml
-sed -i "s/imagePullPolicy\:\ .*/imagePullPolicy\:\ Always/g" $yaml
 sed -i "s/image\:\ .*\/${project}:.*/image\:\ ${escaped_private}/g" $driver_yaml
 sed -i "s/-\ .*\/${project}:.*/-\ ${escaped_private}/g" $driver_yaml
-sed -i "s/imagePullPolicy\:\ .*/imagePullPolicy\:\ Always/g" $driver_yaml

 set +e
@ -1,24 +0,0 @@
#!/bin/bash

NS=longhorn-system
KINDS="daemonset deployments"

function patch_kind {
  kind=$1
  list=$(kubectl -n $NS get $kind -o name)
  for obj in $list
  do
    echo Updating $obj to imagePullPolicy: Always
    name=${obj##*/}
    kubectl -n $NS patch $obj -p '{"spec": {"template": {"spec":{"containers":[{"name":"'$name'","imagePullPolicy":"Always"}]}}}}'
  done
}

for kind in $KINDS
do
  patch_kind $kind
done

echo "Warning: Make sure to check and wait for all pods to be running again!"
echo "Current status: (CTRL-C to exit)"
kubectl get pods -w -n longhorn-system
@ -1,55 +0,0 @@
## Overview

### Install

1. Install Longhorn.
1. Install the Longhorn [upgrade-responder](https://github.com/longhorn/upgrade-responder) stack.
   ```bash
   ./install.sh
   ```
   Sample output:
   ```shell
   secret/influxdb-creds created
   persistentvolumeclaim/influxdb created
   deployment.apps/influxdb created
   service/influxdb created
   Deployment influxdb is running.
   Cloning into 'upgrade-responder'...
   remote: Enumerating objects: 1077, done.
   remote: Counting objects: 100% (1076/1076), done.
   remote: Compressing objects: 100% (454/454), done.
   remote: Total 1077 (delta 573), reused 1049 (delta 565), pack-reused 1
   Receiving objects: 100% (1077/1077), 55.01 MiB | 18.10 MiB/s, done.
   Resolving deltas: 100% (573/573), done.
   Release "longhorn-upgrade-responder" does not exist. Installing it now.
   NAME: longhorn-upgrade-responder
   LAST DEPLOYED: Thu May 11 00:42:44 2023
   NAMESPACE: default
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
   NOTES:
   1. Get the Upgrade Responder server URL by running these commands:
     export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=upgrade-responder,app.kubernetes.io/instance=longhorn-upgrade-responder" -o jsonpath="{.items[0].metadata.name}")
     kubectl port-forward $POD_NAME 8080:8314 --namespace default
     echo "Upgrade Responder server URL is http://127.0.0.1:8080"
   Deployment longhorn-upgrade-responder is running.
   persistentvolumeclaim/grafana-pvc created
   deployment.apps/grafana created
   service/grafana created
   Deployment grafana is running.

   [Upgrade Checker]
   URL       : http://longhorn-upgrade-responder.default.svc.cluster.local:8314/v1/checkupgrade

   [InfluxDB]
   URL       : http://influxdb.default.svc.cluster.local:8086
   Database  : longhorn_upgrade_responder
   Username  : root
   Password  : root

   [Grafana]
   Dashboard : http://1.2.3.4:30864
   Username  : admin
   Password  : admin
   ```
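Once the stack is running, a request against the checkupgrade endpoint can be sketched like this. This is a hedged illustration: the payload field names `appVersion` and `extraTagInfo` are inferred from the `appVersionSchema`/`extraTagInfoSchema` keys in install.sh's requestSchema, not confirmed against the upgrade-responder API, and the version strings are made up.

```python
import json

# Hypothetical payload mirroring the requestSchema defined in install.sh:
# appVersionSchema -> "appVersion", extraTagInfoSchema -> "extraTagInfo".
# Field names are an assumption; verify against the upgrade-responder docs.
payload = {
    "appVersion": "v1.5.1",
    "extraTagInfo": {
        "kubernetesVersion": "v1.27.1",
        "longhornSettingBackupCompressionMethod": "lz4",
    },
}
body = json.dumps(payload)
print(body)
# To send it through the port-forward shown in the sample output:
#   curl -X POST http://127.0.0.1:8080/v1/checkupgrade -d "$BODY"
```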
@ -1,424 +0,0 @@
#!/bin/bash

UPGRADE_RESPONDER_REPO="https://github.com/longhorn/upgrade-responder.git"
UPGRADE_RESPONDER_REPO_BRANCH="master"
UPGRADE_RESPONDER_VALUE_YAML="upgrade-responder-value.yaml"
UPGRADE_RESPONDER_IMAGE_REPO="longhornio/upgrade-responder"
UPGRADE_RESPONDER_IMAGE_TAG="master-head"

INFLUXDB_URL="http://influxdb.default.svc.cluster.local:8086"

APP_NAME="longhorn"

DEPLOYMENT_TIMEOUT_SEC=300
DEPLOYMENT_WAIT_INTERVAL_SEC=5

temp_dir=$(mktemp -d)
trap 'rm -rf "${temp_dir}"' EXIT # -f because packed Git files (.pack, .idx) are write protected.

cp -a ./* ${temp_dir}
cd ${temp_dir}

wait_for_deployment() {
  local deployment_name="$1"
  local start_time=$(date +%s)

  while true; do
    status=$(kubectl rollout status deployment/${deployment_name})
    if [[ ${status} == *"successfully rolled out"* ]]; then
      echo "Deployment ${deployment_name} is running."
      break
    fi

    elapsed_secs=$(($(date +%s) - ${start_time}))
    if [[ ${elapsed_secs} -ge ${DEPLOYMENT_TIMEOUT_SEC} ]]; then
      echo "Timed out waiting for deployment ${deployment_name} to be running."
      exit 1
    fi

    echo "Deployment ${deployment_name} is not running yet. Waiting..."
    sleep ${DEPLOYMENT_WAIT_INTERVAL_SEC}
  done
}

install_influxdb() {
  kubectl apply -f ./manifests/influxdb.yaml
  wait_for_deployment "influxdb"
}

install_grafana() {
  kubectl apply -f ./manifests/grafana.yaml
  wait_for_deployment "grafana"
}

install_upgrade_responder() {
  cat << EOF > ${UPGRADE_RESPONDER_VALUE_YAML}
applicationName: ${APP_NAME}
secret:
  name: upgrade-responder-secrets
  managed: true
  influxDBUrl: "${INFLUXDB_URL}"
  influxDBUser: "root"
  influxDBPassword: "root"
configMap:
  responseConfig: |-
    {
      "versions": [{
        "name": "v1.0.0",
        "releaseDate": "2020-05-18T12:30:00Z",
        "tags": ["latest"]
      }]
    }
  requestSchema: |-
    {
      "appVersionSchema": {
        "dataType": "string",
        "maxLen": 200
      },
      "extraTagInfoSchema": {
        "hostKernelRelease": {
          "dataType": "string",
          "maxLen": 200
        },
        "hostOsDistro": {
          "dataType": "string",
          "maxLen": 200
        },
        "kubernetesNodeProvider": {
          "dataType": "string",
          "maxLen": 200
        },
        "kubernetesVersion": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingAllowRecurringJobWhileVolumeDetached": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingAllowVolumeCreationWithDegradedAvailability": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingAutoCleanupSystemGeneratedSnapshot": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingAutoDeletePodWhenVolumeDetachedUnexpectedly": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingAutoSalvage": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingBackupCompressionMethod": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingBackupTarget": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingCrdApiVersion": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingCreateDefaultDiskLabeledNodes": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingDefaultDataLocality": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingDisableRevisionCounter": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingDisableSchedulingOnCordonedNode": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingFastReplicaRebuildEnabled": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingKubernetesClusterAutoscalerEnabled": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingNodeDownPodDeletionPolicy": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingNodeDrainPolicy": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingOfflineReplicaRebuilding": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingOrphanAutoDeletion": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingPriorityClass": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingRegistrySecret": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingRemoveSnapshotsDuringFilesystemTrim": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingReplicaAutoBalance": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingReplicaSoftAntiAffinity": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingReplicaZoneSoftAntiAffinity": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingReplicaDiskSoftAntiAffinity": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingRestoreVolumeRecurringJobs": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingSnapshotDataIntegrity": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingSnapshotDataIntegrityCronjob": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingStorageNetwork": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingSystemManagedComponentsNodeSelector": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingSystemManagedPodsImagePullPolicy": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingTaintToleration": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornSettingV2DataEngine": {
          "dataType": "string",
          "maxLen": 200
        }
      },
      "extraFieldInfoSchema": {
        "longhornInstanceManagerAverageCpuUsageMilliCores": {
          "dataType": "float"
        },
        "longhornInstanceManagerAverageMemoryUsageBytes": {
          "dataType": "float"
        },
        "longhornManagerAverageCpuUsageMilliCores": {
          "dataType": "float"
        },
        "longhornManagerAverageMemoryUsageBytes": {
          "dataType": "float"
        },
        "longhornNamespaceUid": {
          "dataType": "string",
          "maxLen": 200
        },
        "longhornNodeCount": {
          "dataType": "float"
        },
        "longhornNodeDiskHDDCount": {
          "dataType": "float"
        },
        "longhornNodeDiskNVMeCount": {
          "dataType": "float"
        },
        "longhornNodeDiskSSDCount": {
          "dataType": "float"
        },
        "longhornSettingBackingImageCleanupWaitInterval": {
          "dataType": "float"
        },
        "longhornSettingBackingImageRecoveryWaitInterval": {
          "dataType": "float"
        },
        "longhornSettingBackupConcurrentLimit": {
          "dataType": "float"
        },
        "longhornSettingBackupstorePollInterval": {
          "dataType": "float"
        },
        "longhornSettingConcurrentAutomaticEngineUpgradePerNodeLimit": {
          "dataType": "float"
        },
        "longhornSettingConcurrentReplicaRebuildPerNodeLimit": {
          "dataType": "float"
        },
        "longhornSettingConcurrentVolumeBackupRestorePerNodeLimit": {
          "dataType": "float"
        },
        "longhornSettingDefaultReplicaCount": {
          "dataType": "float"
        },
        "longhornSettingEngineReplicaTimeout": {
          "dataType": "float"
        },
        "longhornSettingFailedBackupTtl": {
          "dataType": "float"
        },
        "longhornSettingGuaranteedInstanceManagerCpu": {
          "dataType": "float"
        },
        "longhornSettingRecurringFailedJobsHistoryLimit": {
          "dataType": "float"
        },
        "longhornSettingRecurringSuccessfulJobsHistoryLimit": {
          "dataType": "float"
        },
        "longhornSettingReplicaFileSyncHttpClientTimeout": {
          "dataType": "float"
        },
        "longhornSettingReplicaReplenishmentWaitInterval": {
          "dataType": "float"
        },
        "longhornSettingRestoreConcurrentLimit": {
          "dataType": "float"
        },
        "longhornSettingStorageMinimalAvailablePercentage": {
          "dataType": "float"
        },
        "longhornSettingStorageOverProvisioningPercentage": {
          "dataType": "float"
        },
        "longhornSettingStorageReservedPercentageForDefaultDisk": {
          "dataType": "float"
        },
        "longhornSettingSupportBundleFailedHistoryLimit": {
          "dataType": "float"
        },
        "longhornVolumeAccessModeRwoCount": {
          "dataType": "float"
        },
        "longhornVolumeAccessModeRwxCount": {
          "dataType": "float"
        },
        "longhornVolumeAccessModeUnknownCount": {
          "dataType": "float"
        },
        "longhornVolumeAverageActualSizeBytes": {
          "dataType": "float"
        },
        "longhornVolumeAverageNumberOfReplicas": {
          "dataType": "float"
        },
        "longhornVolumeAverageSizeBytes": {
          "dataType": "float"
        },
        "longhornVolumeAverageSnapshotCount": {
          "dataType": "float"
        },
        "longhornVolumeDataLocalityBestEffortCount": {
          "dataType": "float"
        },
        "longhornVolumeDataLocalityDisabledCount": {
          "dataType": "float"
        },
        "longhornVolumeDataLocalityStrictLocalCount": {
          "dataType": "float"
        },
        "longhornVolumeFrontendBlockdevCount": {
          "dataType": "float"
        },
        "longhornVolumeFrontendIscsiCount": {
          "dataType": "float"
        },
        "longhornVolumeOfflineReplicaRebuildingDisabledCount": {
          "dataType": "float"
        },
        "longhornVolumeOfflineReplicaRebuildingEnabledCount": {
          "dataType": "float"
        },
        "longhornVolumeReplicaAutoBalanceDisabledCount": {
          "dataType": "float"
        },
        "longhornVolumeReplicaSoftAntiAffinityFalseCount": {
          "dataType": "float"
        },
        "longhornVolumeReplicaZoneSoftAntiAffinityTrueCount": {
          "dataType": "float"
        },
        "longhornVolumeReplicaDiskSoftAntiAffinityTrueCount": {
          "dataType": "float"
        },
        "longhornVolumeRestoreVolumeRecurringJobFalseCount": {
          "dataType": "float"
        },
        "longhornVolumeSnapshotDataIntegrityDisabledCount": {
          "dataType": "float"
        },
        "longhornVolumeSnapshotDataIntegrityFastCheckCount": {
          "dataType": "float"
        },
        "longhornVolumeUnmapMarkSnapChainRemovedFalseCount": {
          "dataType": "float"
        }
      }
    }
image:
  repository: ${UPGRADE_RESPONDER_IMAGE_REPO}
  tag: ${UPGRADE_RESPONDER_IMAGE_TAG}
EOF

  git clone -b ${UPGRADE_RESPONDER_REPO_BRANCH} ${UPGRADE_RESPONDER_REPO}
  helm upgrade --install ${APP_NAME}-upgrade-responder upgrade-responder/chart -f ${UPGRADE_RESPONDER_VALUE_YAML}
  wait_for_deployment "${APP_NAME}-upgrade-responder"
}

output() {
  local upgrade_responder_service_info=$(kubectl get svc/${APP_NAME}-upgrade-responder --no-headers)
  local upgrade_responder_service_port=$(echo "${upgrade_responder_service_info}" | awk '{print $5}' | cut -d'/' -f1)
  echo # a blank line to separate the installation outputs for better readability.
  printf "[Upgrade Checker]\n"
  printf "%-10s: http://${APP_NAME}-upgrade-responder.default.svc.cluster.local:${upgrade_responder_service_port}/v1/checkupgrade\n\n" "URL"

  printf "[InfluxDB]\n"
  printf "%-10s: ${INFLUXDB_URL}\n" "URL"
  printf "%-10s: ${APP_NAME}_upgrade_responder\n" "Database"
  printf "%-10s: root\n" "Username"
  printf "%-10s: root\n\n" "Password"

  local public_ip=$(curl -s https://ifconfig.me/ip)
  local grafana_service_info=$(kubectl get svc/grafana --no-headers)
  local grafana_service_port=$(echo "${grafana_service_info}" | awk '{print $5}' | cut -d':' -f2 | cut -d'/' -f1)
  printf "[Grafana]\n"
  printf "%-10s: http://${public_ip}:${grafana_service_port}\n" "Dashboard"
  printf "%-10s: admin\n" "Username"
  printf "%-10s: admin\n" "Password"
}

install_influxdb
install_upgrade_responder
install_grafana
output
@ -1,86 +0,0 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:7.1.0
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_INSTALL_PLUGINS
              value: "grafana-worldmap-panel"
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer
@ -1,90 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: influxdb-creds
  namespace: default
type: Opaque
data:
  INFLUXDB_HOST: aW5mbHV4ZGI= # influxdb
  INFLUXDB_PASSWORD: cm9vdA== # root
  INFLUXDB_USERNAME: cm9vdA== # root
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb
  namespace: default
  labels:
    app: influxdb
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: influxdb
  name: influxdb
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: influxdb
    spec:
      containers:
        - image: docker.io/influxdb:1.8.10
          imagePullPolicy: IfNotPresent
          name: influxdb
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          envFrom:
            - secretRef:
                name: influxdb-creds
          volumeMounts:
            - mountPath: /var/lib/influxdb
              name: var-lib-influxdb
      volumes:
        - name: var-lib-influxdb
          persistentVolumeClaim:
            claimName: influxdb
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: influxdb
  name: influxdb
  namespace: default
spec:
  ports:
    - port: 8086
      protocol: TCP
      targetPort: 8086
  selector:
    app: influxdb
  sessionAffinity: None
  type: ClusterIP
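The `data` values in the Secret above are base64-encoded; the inline comments claim the plaintexts. A quick sketch to verify them locally:

```shell
# Decode the Secret values; the comments in the manifest say these should
# be "influxdb" and "root".
influx_host=$(printf 'aW5mbHV4ZGI=' | base64 -d)
influx_user=$(printf 'cm9vdA==' | base64 -d)
echo "host=${influx_host} user=${influx_user}"
```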
57
docs/chart.md
Normal file
@ -0,0 +1,57 @@
# Rancher Longhorn Chart

The following document pertains to running Longhorn from the Rancher 2.0 chart.

## Source Code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

1. Longhorn Engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine
2. Longhorn Manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager
3. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui

## Prerequisites

1. Rancher v2.1+
2. Docker v1.13+
3. Kubernetes v1.8+ cluster with 1 or more nodes and the Mount Propagation feature enabled. If your Kubernetes cluster was provisioned by Rancher v2.0.7 or later, MountPropagation is enabled by default. [Check your Kubernetes environment now](https://github.com/rancher/longhorn#environment-check-script). If MountPropagation is disabled, the Kubernetes Flexvolume driver will be deployed instead of the default CSI driver, and the Base Image feature will be disabled as well.
4. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
5. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is the recommended guest OS image since it already contains `open-iscsi`.

## Uninstallation

1. To prevent damage to the Kubernetes cluster, we recommend deleting all Kubernetes workloads using Longhorn volumes (PersistentVolume, PersistentVolumeClaim, StorageClass, Deployment, StatefulSet, DaemonSet, etc.) first.

2. From the Rancher UI, navigate to the `Catalog Apps` tab and delete the Longhorn app.

## Troubleshooting

### I deleted the Longhorn App from Rancher UI instead of following the uninstallation procedure

Redeploy the (same version) Longhorn App, then follow the uninstallation procedure above.

### Problems with CRDs

If your CRD instances or the CRDs themselves can't be deleted for whatever reason, run the commands below to clean up. Caution: this will wipe all Longhorn state!

```
# Delete CRD finalizers, instances and definitions
for crd in $(kubectl get crd -o jsonpath={.items[*].metadata.name} | tr ' ' '\n' | grep longhorn.rancher.io); do
  kubectl -n ${NAMESPACE} get $crd -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f -
  kubectl -n ${NAMESPACE} delete $crd --all
  kubectl delete crd/$crd
done
```

### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc cannot use it

Check if the volume plugin directory has been set correctly. It is automatically detected unless the user explicitly sets it.

By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).

Some vendors choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.

Users can find the correct directory by running `ps aux | grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.

---
Please see [link](https://github.com/rancher/longhorn) for more information.
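The manual check described above (inspect the kubelet command line for `--volume-plugin-dir`, fall back to the Kubernetes default) can be written as a small script. This is an illustrative sketch, not Longhorn's actual detection code:

```shell
# Look for --volume-plugin-dir on a running kubelet command line; if kubelet
# isn't running or the flag is absent, fall back to the Kubernetes default.
default_dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
kubelet_args=$(ps -eo args 2>/dev/null | grep -m1 '[k]ubelet' || true)
plugin_dir=$(echo "${kubelet_args}" | sed -n 's/.*--volume-plugin-dir[= ]\([^ ]*\).*/\1/p')
echo "volume plugin dir: ${plugin_dir:-${default_dir}}"
```

On a host without a running kubelet this prints the Kubernetes default directory.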
53
docs/csi-config.md
Normal file
@ -0,0 +1,53 @@
# Longhorn CSI on K3S

## Requirements
1. Kubernetes v1.11 or higher.
2. Longhorn v0.4.1 or higher.

## Instruction
#### K3S:
##### 1. For Longhorn v0.7.0 and above
Longhorn v0.7.0 and above support k3s v0.10.0 and above only by default.

If you want to deploy these new Longhorn versions on versions before k3s v0.10.0, you need to set `--kubelet-root-dir` to `<data-dir>/agent/kubelet` for the Deployment `longhorn-driver-deployer` in `longhorn/deploy/longhorn.yaml`.
`data-dir` is a `k3s` arg and it can be set when you launch a k3s server. By default it is `/var/lib/rancher/k3s`.

##### 2. For Longhorn before v0.7.0
Longhorn versions before v0.7.0 support k3s below v0.10.0 only by default.

If you want to deploy these older Longhorn versions on k3s v0.10.0 and above, you need to set `--kubelet-root-dir` to `/var/lib/kubelet` for the Deployment `longhorn-driver-deployer` in `longhorn/deploy/longhorn.yaml`.

## Troubleshooting
### Common issues
#### Failed to get arg root-dir: Cannot get kubelet root dir, no related proc for root-dir detection ...

This error occurs because Longhorn cannot detect the root dir set up for the kubelet, so the CSI plugin installation fails.

Users can override the root-dir detection by manually setting the argument `kubelet-root-dir` here:
https://github.com/rancher/longhorn/blob/master/deploy/longhorn.yaml#L329

**For K3S v0.10.0-**

Run `ps aux | grep k3s` and get the argument `--data-dir` or `-d` on the k3s server node.

e.g.
```
$ ps uax | grep k3s
root      4160  0.0  0.0  51420  3948 pts/0    S+   00:55   0:00 sudo /usr/local/bin/k3s server --data-dir /opt/test/k3s/data/dir
root      4161 49.0  4.0 259204 164292 pts/0   Sl+  00:55   0:04 /usr/local/bin/k3s server --data-dir /opt/test/k3s/data/dir
```
You will find `data-dir` in the cmdline of the `k3s` proc. By default it is not set and `/var/lib/rancher/k3s` will be used. Joining `data-dir` with `/agent/kubelet` gives you the `root-dir`. So the default `root-dir` for K3S is `/var/lib/rancher/k3s/agent/kubelet`.

If K3S is using a configuration file, you would need to check the configuration file to locate the `data-dir` parameter.

**For K3S v0.10.0+**

It is always `/var/lib/kubelet`.

## Background
#### Longhorn versions before v0.7.0 don't work on K3S v0.10.0 or above
K3S now sets its kubelet directory to `/var/lib/kubelet`. See [the K3S release comment](https://github.com/rancher/k3s/releases/tag/v0.10.0) for details.

## Reference
https://github.com/kubernetes-csi/driver-registrar
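The detection procedure above for K3S before v0.10.0 can be sketched in a few lines of shell (an illustrative sketch, not Longhorn's actual detection logic):

```shell
# Derive the kubelet root-dir for K3S before v0.10.0: take --data-dir from
# the k3s server command line (default /var/lib/rancher/k3s) and join it
# with /agent/kubelet.
k3s_args=$(ps -eo args 2>/dev/null | grep -m1 '[k]3s server' || true)
data_dir=$(echo "${k3s_args}" | sed -n 's/.*--data-dir \([^ ]*\).*/\1/p')
root_dir="${data_dir:-/var/lib/rancher/k3s}/agent/kubelet"
echo "root-dir: ${root_dir}"
```

On a host without a running k3s server this falls back to the default, printing `/var/lib/rancher/k3s/agent/kubelet`.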
91
docs/customized-default-setting.md
Normal file
@ -0,0 +1,91 @@
# Customized Default Setting

## Overview
During Longhorn system deployment, users can customize the default settings for Longhorn, e.g. specify `Create Default Disk With Node Labeled` and `Default Data Path` before starting the Longhorn system.

## Usage
### Note:
1. This default setting only applies to a Longhorn system that hasn't been deployed yet. It has no impact on an existing Longhorn system.
2. Users should modify the settings for an existing Longhorn system via the UI.

### Via Rancher UI
[Cluster] -> System -> Apps -> Launch -> longhorn -> LONGHORN DEFAULT SETTINGS

### Via Longhorn deployment yaml file
1. Download the longhorn repo:
```
git clone https://github.com/longhorn/longhorn.git
```

2. Modify the config map named `longhorn-default-setting` in the yaml file `longhorn/deploy/longhorn.yaml`. For example:
```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-default-setting
  namespace: longhorn-system
data:
  default-setting.yaml: |-
    backup-target: s3://backupbucket@us-east-1/backupstore
    backup-target-credential-secret: minio-secret
    create-default-disk-labeled-nodes: true
    default-data-path: /var/lib/rancher/longhorn-example/
    replica-soft-anti-affinity: false
    storage-over-provisioning-percentage: 600
    storage-minimal-available-percentage: 15
    upgrade-checker: false
    default-replica-count: 2
    guaranteed-engine-cpu:
    default-longhorn-static-storage-class: longhorn-static-example
    backupstore-poll-interval: 500
    taint-toleration: key1=value1:NoSchedule; key2:NoExecute
---
```

### Via helm
1. Download the chart in the longhorn repo:
```
git clone https://github.com/longhorn/longhorn.git
```

2.1. Use the helm command with the `--set` flag to modify the default settings.
For example:
```
helm install ./longhorn/chart --name longhorn --namespace longhorn-system --set defaultSettings.taintToleration="key1=value1:NoSchedule; key2:NoExecute"
```

2.2. Or directly modify the default settings in the yaml file `longhorn/chart/values.yaml`, then use the helm command without `--set` to deploy Longhorn.
For example:

In `longhorn/chart/values.yaml`:
```
defaultSettings:
  backupTarget: s3://backupbucket@us-east-1/backupstore
  backupTargetCredentialSecret: minio-secret
  createDefaultDiskLabeledNodes: true
  defaultDataPath: /var/lib/rancher/longhorn-example/
  replicaSoftAntiAffinity: false
  storageOverProvisioningPercentage: 600
  storageMinimalAvailablePercentage: 15
  upgradeChecker: false
  defaultReplicaCount: 2
  guaranteedEngineCPU:
  defaultLonghornStaticStorageClass: longhorn-static-example
  backupstorePollInterval: 500
  taintToleration: key1=value1:NoSchedule; key2:NoExecute
```

Then use the helm command:
```
helm install ./longhorn/chart --name longhorn --namespace longhorn-system
```

For more info about using helm, see:
[Install-Longhorn-with-helm](../README.md#install-longhorn-with-helm)

## History
[Original feature request](https://github.com/longhorn/longhorn/issues/623)

Available since v0.6.0
53
docs/dr-volume.md
Normal file
@ -0,0 +1,53 @@
# Disaster Recovery Volume
## What is Disaster Recovery Volume?
To increase the resiliency of the volume, Longhorn supports disaster recovery volumes.

The disaster recovery volume is designed for the backup cluster in case the whole main cluster goes down.
A disaster recovery volume is normally in standby mode. Users need to activate it before using it as a normal volume.
A disaster recovery volume can be created from a volume's backup in the backup store. Longhorn will monitor its
original backup volume and incrementally restore from the latest backup. Once the original volume in the main cluster goes
down and users decide to activate the disaster recovery volume in the backup cluster, the disaster recovery volume can be
activated immediately in most conditions, greatly reducing the time needed to restore the data from the
backup store to the volume in the backup cluster.

## How to create Disaster Recovery Volume?
1. In cluster A, make sure the original volume X has a backup created or recurring backup scheduling.
2. Set the backup target in cluster B to be the same as cluster A's.
3. In the backup page of cluster B, choose the backup volume X, then create disaster recovery volume Y. It's highly recommended
to use the backup volume name as the disaster recovery volume name.
4. Attach the disaster recovery volume Y to any node. Longhorn will then automatically poll for the last backup of
volume X, and incrementally restore it to volume Y.
5. If volume X is down, users can activate volume Y immediately. Once activated, volume Y will become a
normal Longhorn volume.
    5.1. Notice that deactivating a normal volume is not allowed.

## About Activating Disaster Recovery Volume
1. A disaster recovery volume doesn't support creating/deleting/reverting snapshots, creating backups, or creating
PV/PVC. Users cannot update `Backup Target` in Settings if any disaster recovery volumes exist.

2. When users try to activate a disaster recovery volume, Longhorn will check the last backup of the original volume. If
it hasn't been restored, the restoration will be started, and the activate action will fail. Users need to wait for
the restoration to complete before retrying.

3. For a disaster recovery volume, `Last Backup` indicates the most recent backup of its original backup volume. If the icon
representing the disaster recovery volume is gray, the volume is restoring `Last Backup` and users cannot activate this
volume right now; if the icon is blue, the volume has restored the `Last Backup`.

## RPO and RTO
Typically incremental restoration is triggered by the periodic backup store update. Users can set the backup store update
interval in `Setting - General - Backupstore Poll Interval`. Notice that this interval can potentially impact the
Recovery Time Objective (RTO). If it is too long, there may be a large amount of data for the disaster recovery volume to
restore, which will take a long time. As for the Recovery Point Objective (RPO), it is determined by the recurring backup
scheduling of the backup volume. You can check [here](snapshot-backup.md) to see how to set recurring backups in Longhorn.

e.g.:

If recurring backup scheduling for normal volume A is creating a backup every hour, then the RPO is 1 hour.

Assuming the volume creates a backup every hour, and incrementally restoring the data of one backup takes 5 minutes:

If `Backupstore Poll Interval` is 30 minutes, then there will be at most one backup worth of data since the last restoration.
The time for restoring one backup is 5 minutes, so the RTO is 5 minutes.

If `Backupstore Poll Interval` is 12 hours, then there will be at most 12 backups worth of data since the last restoration.
The time for restoring the backups is 5 * 12 = 60 minutes, so the RTO is 60 minutes.
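The RTO arithmetic in the examples above can be written out as a small shell sketch (the numbers are the assumed values from the example, not Longhorn defaults):

```shell
# Worst-case RTO = (backups accumulated between polls) * (restore time per backup)
backup_interval_min=60       # recurring backup every hour
restore_per_backup_min=5     # assumed incremental restore time per backup
poll_interval_min=720        # Backupstore Poll Interval of 12 hours

backups_behind=$(( poll_interval_min / backup_interval_min ))
rto_min=$(( backups_behind * restore_per_backup_min ))
echo "worst-case RTO: ${rto_min} minutes"
```

With a 30-minute poll interval, `backups_behind` drops to at most 1 and the worst-case RTO to 5 minutes, matching the first example.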
115
docs/driver.md
Normal file
@ -0,0 +1,115 @@
# Kubernetes driver

## Background

Longhorn can be used in Kubernetes to provide persistent storage through either the Longhorn Container Storage Interface (CSI) driver or the Longhorn FlexVolume driver. Longhorn will automatically deploy one of the drivers, depending on the Kubernetes cluster configuration. The user can also specify the driver in the deployment yaml file. CSI is preferred.

Note that a volume created and used through one driver won't be recognized by Kubernetes using the other driver. So please don't switch drivers (e.g. during upgrade) if you have existing volumes created using the old driver. If you really want to switch drivers, see [here](upgrade.md#migrating-between-flexvolume-and-csi-driver) for instructions.

## CSI

### Requirement for the CSI driver

1. Kubernetes v1.10+
    1. CSI is in beta release for this version of Kubernetes, and enabled by default.
2. Mount propagation feature gate enabled.
    1. It's enabled by default in Kubernetes v1.10, but some early versions of RKE may not enable it.
    2. You can check it by using the [environment check script](#environment-check-script).
3. If the above conditions cannot be met, Longhorn will fall back to the FlexVolume driver.

### Check if your setup satisfies the CSI requirement
1. Use the following command to check your Kubernetes server version
```
kubectl version
```
Result:
```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
The `Server Version` should be `v1.10` or above.
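The version check above can also be scripted. This sketch parses the `Minor` field out of the sample line shown (a hedged illustration; note some distributions report minors like `"10+"`, which this simple regex does not handle):

```shell
# Extract the server minor version from a sample `kubectl version` line and
# compare against the CSI requirement (v1.10+).
server_line='Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1"}'
minor=$(echo "${server_line}" | sed -n 's/.*Minor:"\([0-9]*\).*/\1/p')
if [ "${minor}" -ge 10 ]; then echo "CSI requirement met"; fi
```

Against a live cluster the `server_line` would come from `kubectl version` output instead of a hard-coded string.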

2. The result of the [environment check script](#environment-check-script) should contain `MountPropagation is enabled!`.

### Environment check script

We've written a script to help users gather enough information about the relevant factors.

Before installing, run:
```
curl -sSfL https://raw.githubusercontent.com/rancher/longhorn/master/scripts/environment_check.sh | bash
```
Example result:
```
daemonset.apps/longhorn-environment-check created
waiting for pods to become ready (0/3)
all pods ready (3/3)

MountPropagation is enabled!

cleaning up...
daemonset.apps "longhorn-environment-check" deleted
clean up complete
```

### Successful CSI deployment example
```
$ kubectl -n longhorn-system get pod
NAME                                        READY     STATUS    RESTARTS   AGE
csi-attacher-6fdc77c485-8wlpg               1/1       Running   0          9d
csi-attacher-6fdc77c485-psqlr               1/1       Running   0          9d
csi-attacher-6fdc77c485-wkn69               1/1       Running   0          9d
csi-provisioner-78f7db7d6d-rj9pr            1/1       Running   0          9d
csi-provisioner-78f7db7d6d-sgm6w            1/1       Running   0          9d
csi-provisioner-78f7db7d6d-vnjww            1/1       Running   0          9d
engine-image-ei-6e2b0e32-2p9nk              1/1       Running   0          9d
engine-image-ei-6e2b0e32-s8ggt              1/1       Running   0          9d
engine-image-ei-6e2b0e32-wgkj5              1/1       Running   0          9d
longhorn-csi-plugin-g8r4b                   2/2       Running   0          9d
longhorn-csi-plugin-kbxrl                   2/2       Running   0          9d
longhorn-csi-plugin-wv6sb                   2/2       Running   0          9d
longhorn-driver-deployer-788984b49c-zzk7b   1/1       Running   0          9d
longhorn-manager-nr5rs                      1/1       Running   0          9d
longhorn-manager-rd4k5                      1/1       Running   0          9d
longhorn-manager-snb9t                      1/1       Running   0          9d
longhorn-ui-67b9b6887f-n7x9q                1/1       Running   0          9d
```

For more information on CSI configuration, see [here](csi-config.md).

## Flexvolume
### Requirement for the FlexVolume driver

1. Kubernetes v1.8+
2. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on every node of the Kubernetes cluster.

### Flexvolume driver directory

Longhorn now has the ability to auto-detect the location of the Flexvolume directory.

If the Flexvolume driver wasn't installed correctly, there can be a few reasons:
1. If `kubelet` is running inside a container rather than on the host OS, the host bind-mount path for the Flexvolume driver directory (`--volume-plugin-dir`) must be the same as the path used by the kubelet process.
    1. For example, if the kubelet is using `/var/lib/kubelet/volumeplugins` as the Flexvolume driver directory, then the host bind-mount must exist for that directory, e.g. `/var/lib/kubelet/volumeplugins:/var/lib/kubelet/volumeplugins`, or any identical bind-mount for the parent directory.
    2. This is because Longhorn detects the directory used by the `kubelet` command line to decide where to install the driver on the host.
2. The kubelet setting for the Flexvolume driver directory must be the same across all the nodes.
    1. Longhorn doesn't support a heterogeneous setup at the moment.

### Successful Flexvolume deployment example
```
# kubectl -n longhorn-system get pod
NAME                                        READY     STATUS    RESTARTS   AGE
engine-image-ei-57b85e25-8v65d              1/1       Running   0          7d
engine-image-ei-57b85e25-gjjs6              1/1       Running   0          7d
engine-image-ei-57b85e25-t2787              1/1       Running   0          7d
longhorn-driver-deployer-5469b87b9c-b9gm7   1/1       Running   0          2h
longhorn-flexvolume-driver-lth5g            1/1       Running   0          2h
longhorn-flexvolume-driver-tpqf7            1/1       Running   0          2h
longhorn-flexvolume-driver-v9mrj            1/1       Running   0          2h
longhorn-manager-7x8x8                      1/1       Running   0          9h
longhorn-manager-8kqf4                      1/1       Running   0          9h
longhorn-manager-kln4h                      1/1       Running   0          9h
longhorn-ui-f849dcd85-cgkgg                 1/1       Running   0          5d
```
101
docs/expansion.md
Normal file
@ -0,0 +1,101 @@
# Volume Expansion

## Overview
- Longhorn supports OFFLINE volume expansion only.
- Longhorn will expand the frontend (e.g. block device) and then expand the filesystem.

## Prerequisite:
1. Longhorn version v0.8.0 or higher.
2. The volume to be expanded is in the `detached` state.

## Expand a Longhorn volume
There are two ways to expand a Longhorn volume:

#### Via PVC
- This method applies only if:
    1. Kubernetes version v1.16 or higher.
    2. The PVC is dynamically provisioned by Kubernetes with the Longhorn StorageClass.
    3. The field `allowVolumeExpansion` is `true` in the related StorageClass.
- This method is recommended if it's applicable, since the PVC and PV will be updated automatically and everything stays consistent after expansion.
- Usage: Find the corresponding PVC for the Longhorn volume, then modify the requested `storage` of the PVC spec. e.g.,
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"longhorn-simple-pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"longhorn"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
  creationTimestamp: "2019-12-21T01:36:16Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: longhorn-simple-pvc
  namespace: default
  resourceVersion: "162431"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/longhorn-simple-pvc
  uid: 0467ae73-22a5-4eba-803e-464cc0b9d975
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn
  volumeMode: Filesystem
  volumeName: pvc-0467ae73-22a5-4eba-803e-464cc0b9d975
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
```
Modify `spec.resources.requests.storage` of this PVC.
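If the PVC route applies, the edit can also be made with a single `kubectl patch`. This is a hedged example using the PVC name from the manifest above; adjust the name, namespace, and target size to your setup, and note it requires a live cluster with `allowVolumeExpansion: true` on the StorageClass:

```shell
# Grow the example PVC from 1Gi to 2Gi by patching spec.resources.requests.storage.
kubectl patch pvc longhorn-simple-pvc -n default \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
```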

#### Via Longhorn UI
- If your Kubernetes version is v1.14 or v1.15, this method is the only choice for Longhorn volume expansion.
- Notice that the volume size will be updated after the expansion, but the capacity of the corresponding PVC and PV won't change. Users need to take care of them.
- Usage: On the volume page of the Longhorn UI, click `Expand` for the volume.

## Frontend expansion
- To prevent the frontend expansion from being interfered with by unexpected data R/W, Longhorn supports OFFLINE expansion only.
  The `detached` volume will be automatically attached to a random node in maintenance mode.
- Rebuilding/adding replicas is not allowed during the expansion and vice versa.

## Filesystem expansion
#### Longhorn will try to expand the file system only if:
1. The expanded size is greater than the current size.
2. There is a Linux filesystem in the Longhorn volume.
3. The filesystem used in the Longhorn volume is one of the following:
    1. ext4
    2. XFS
4. The Longhorn volume is using the block device frontend.

#### Handling volume revert:
If users revert a volume to a snapshot with a smaller size, the frontend of the volume still holds the expanded size, but the filesystem size will be the same as that of the reverted snapshot. In this case, users need to handle the filesystem manually:
1. Attach the volume to any node.
2. Log into the corresponding node and expand the filesystem:
    - If the filesystem is `ext4`, the volume might need to be mounted and umounted once before resizing the filesystem manually. Otherwise, executing `resize2fs` might result in an error:
    ```
    resize2fs: Superblock checksum does not match superblock while trying to open ......
    Couldn't find valid filesystem superblock.
    ```
    Follow the steps below to resize the filesystem:
    ```
    mount /dev/longhorn/<volume name> <arbitrary mount directory>
    umount /dev/longhorn/<volume name>
    mount /dev/longhorn/<volume name> <arbitrary mount directory>
    resize2fs /dev/longhorn/<volume name>
    umount /dev/longhorn/<volume name>
    ```
    - If the filesystem is `xfs`, users can directly mount and then expand the filesystem.
    ```
    mount /dev/longhorn/<volume name> <arbitrary mount directory>
    xfs_growfs <the mount directory>
    umount /dev/longhorn/<volume name>
    ```
12
docs/gke.md
Normal file
@ -0,0 +1,12 @@
# Google Kubernetes Engine

1. GKE clusters must use the `Ubuntu` OS instead of the `Container-Optimized` OS, in order to satisfy Longhorn's `open-iscsi` dependency.

2. GKE requires users to manually claim themselves as cluster admin to enable RBAC. Before installing Longhorn, run the following command:

```
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>
```

where `name@example.com` is the user's account name in GCE, and it's case sensitive. See [this document](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for more information.
24
docs/iscsi.md
Normal file
@ -0,0 +1,24 @@
|
|||||||
|
# iSCSI support
|
||||||
|
|
||||||
|
Longhorn supports iSCSI target frontend mode. The user can connect to it
|
||||||
|
through any iSCSI client, including open-iscsi, and virtual machine
|
||||||
|
hypervisor like KVM, as long as it's in the same network with the Longhorn system.
|
||||||
|
|
||||||
|
Longhorn Driver (CSI/Flexvolume) doesn't support iSCSI mode.
|
||||||
|
|
||||||
|
To start volume with iSCSI target frontend mode, select `iSCSI` as the frontend
|
||||||
|
when creating the volume. After volume has been attached, the user will see
|
||||||
|
something like following in the `endpoint` field:
|
||||||
|
|
||||||
|
```
|
||||||
|
iscsi://10.42.0.21:3260/iqn.2014-09.com.rancher:testvolume/1
|
||||||
|
```
|
||||||
|
|
||||||
|
Here:
|
||||||
|
1. The IP and port is `10.42.0.21:3260`.
|
||||||
|
2. The target name is `iqn.2014-09.com.rancher:testvolume`. `testvolume` is the
|
||||||
|
name of the volume.
|
||||||
|
3. The LUN number is 1. Longhorn always uses LUN 1.
|
||||||
|
|
||||||
|
Then user can use above information to connect to the iSCSI target provided by
|
||||||
|
Longhorn using an iSCSI client.
|
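The `endpoint` field always has the fixed shape `iscsi://<ip>:<port>/<target name>/<LUN>`. A minimal sketch (a hypothetical helper, not part of Longhorn) that splits it into the three pieces listed above:

```
#!/bin/sh
# Hypothetical helper: split a Longhorn iSCSI endpoint into portal, target
# name, and LUN using plain POSIX parameter expansion.
parse_iscsi_endpoint() {
  ep="${1#iscsi://}"   # drop the scheme
  portal="${ep%%/*}"   # e.g. 10.42.0.21:3260
  rest="${ep#*/}"
  target="${rest%/*}"  # e.g. iqn.2014-09.com.rancher:testvolume
  lun="${rest##*/}"    # e.g. 1
  echo "$portal $target $lun"
}
```

The portal and target can then be passed to a client, for example `iscsiadm -m discovery -t sendtargets -p <portal>` followed by `iscsiadm -m node -T <target> -p <portal> --login`.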
`docs/k8s-workload.md` (new file, 39 lines)

# Workload identification for volume

Users can now identify the current workload or the workload history for existing Longhorn volumes:

```
PV Name: test1-pv
PV Status: Bound

Namespace: default
PVC Name: test1-pvc

Last Pod Name: volume-test-1
Last Pod Status: Running
Last Workload Name: volume-test
Last Workload Type: Statefulset
Last time used by Pod: a few seconds ago
```

## About historical status

A few fields can contain a historical status instead of the current status. These fields help users figure out which workload has used the volume in the past:

1. `Last time bound with PVC`: If this field is set, it indicates that there is currently no PVC bound to this volume. The related fields will show the most recently bound PVC.
2. `Last time used by Pod`: If these fields are set, they indicate that there is currently no workload using this volume. The related fields will show the most recent workload that used this volume.

# PV/PVC creation for existing Longhorn volume

Users can now create a PV/PVC via the Longhorn UI for an existing Longhorn volume. Only a detached volume can be used by a newly created pod.

## About special fields of PV/PVC

Since the Longhorn volume already exists when the PV/PVC is created, no StorageClass is needed to dynamically provision the volume. However, the field `storageClassName` is still set in the PVC/PV for PVC binding purposes, and it's unnecessary for users to create the related StorageClass object.

By default, the StorageClass for Longhorn-created PV/PVC is `longhorn-static`. Users can modify it in `Setting - General - Default Longhorn Static StorageClass Name` as needed.

Users need to manually delete PVCs and PVs created by Longhorn.
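To illustrate the `storageClassName` binding described above, a PVC claiming such a Longhorn-created PV looks roughly like the following sketch (the name, namespace, and size are illustrative assumptions; the actual objects are generated by the Longhorn UI):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test1-pvc          # hypothetical name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-static   # matches the PV; no StorageClass object needed
  resources:
    requests:
      storage: 2Gi         # must match the existing volume's size
```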
`docs/longhorn-ingress.md` (new file, 49 lines)

## Create Nginx Ingress Controller with basic authentication

1. Create a basic auth file `auth`:

   > It's important that the generated file is named `auth` (actually, that the secret has a key `data.auth`); otherwise the ingress controller returns a 503.

   `$ USER=<USERNAME_HERE>; PASSWORD=<PASSWORD_HERE>; echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth`

2. Create a secret:

   `$ kubectl -n longhorn-system create secret generic basic-auth --from-file=auth`

3. Create an Nginx ingress manifest `longhorn-ingress.yml`:

```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: longhorn-frontend
          servicePort: 80
```

4. Apply the ingress manifest:

   `$ kubectl -n longhorn-system apply -f longhorn-ingress.yml`
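If the ingress unexpectedly returns 401 or 503, one thing worth checking is the contents of the `auth` file from step 1. A minimal sketch (a hypothetical check, not part of the Longhorn docs) that verifies a line matches the `user:$apr1$...` htpasswd format produced by `openssl passwd -apr1`:

```
#!/bin/sh
# Hypothetical check: does a basic-auth line look like an apr1 htpasswd entry?
valid_auth_line() {
  echo "$1" | grep -Eq '^[^:]+:\$apr1\$[^$]+\$.+$'
}
```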

#### For AWS EKS clusters:

Users need to create an ELB to expose the Nginx ingress controller to the internet (additional costs may apply).

1. Create the prerequisite resources:
   https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#prerequisite-generic-deployment-command

2. Create an ELB:
   https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#aws
Some files were not shown because too many files have changed in this diff.