If you run into problems, please join the group and ask there instead of commenting. The group link is on the About Me page.
App Installation Troubleshooting Workflow
First, check the events
(click the installed app's name, then click Events)
Failed image pulls, permission misconfigurations, and similar errors generally show up here.
Next, check the logs
(click the three dots next to the app's name, then click Logs)
This is the app's complete log output; permission misconfigurations and errors from the app itself generally show up here.
Common Errors
nil pointer evaluating interface {}.mode
[EFAULT] Failed to install chart release: Error: INSTALLATION FAILED: template: APPNAME/templates/common.yaml:1:3: executing "APPNAME/templates/common.yaml" at : error calling include: template: APPNAME/charts/common/templates/loader/_all.tpl:6:6: executing "tc.v1.common.loader.all" at : error calling include: template: APPNAME/charts/common/templates/loader/_apply.tpl:47:6: executing "tc.v1.common.loader.apply" at : error calling include: template: APPNAME/charts/common/templates/spawner/_pvc.tpl:25:10: executing "tc.v1.common.spawner.pvc" at : error calling include: template: APPNAME/charts/common/templates/lib/storage/_validation.tpl:18:43: executing "tc.v1.common.lib.persistence.validation" at <$objectData.static.mode>: nil pointer evaluating interface {}.mode
The issue: This error is caused by an old version of Helm; Helm > 3.9.4 is required.
The solution: Upgrade to TrueNAS SCALE Cobia (23.10.x) or newer: System Settings -> Update -> select Cobia from the dropdown. The SCALE Bluefin and Angelfish releases are no longer supported.
cannot patch "APPNAME-redis" with kind StatefulSet
[EFAULT] Failed to update App: Error: UPGRADE FAILED: cannot patch "APPNAME-redis" with kind StatefulSet: StatefulSet.apps "APPNAME-redis" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
The solution: Check which apps have StatefulSets by running:
k3s kubectl get statefulsets -A | grep "ix-"
Then, to delete the statefulset:
k3s kubectl delete statefulset STATEFULSETNAME -n ix-APPNAME
Example:
k3s kubectl delete statefulset nextcloud-redis -n ix-nextcloud
Once deleted, you can attempt the update again (or, if you were already on the latest version, edit the app and save without making any changes).
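To confirm the StatefulSet is actually gone before retrying, a quick check with plain kubectl (using the nextcloud example above):
k3s kubectl get statefulsets -n ix-nextcloud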
Operator-Related Errors
service "cnpg-webhook-service" not found
[EFAULT] Failed to update App: Error: UPGRADE FAILED: cannot patch "APPNAME-cnpg-main" with kind Cluster: Internal error occurred: failed calling webhook "mcluster.cnpg.io": failed to call webhook: Post "https://cnpg-webhook-service.ix-cloudnative-pg.svc/mutate-postgresql-cnpg-io-v1-cluster?timeout=10s": service "cnpg-webhook-service" not found
The solution:
- Run the following command:
k3s kubectl delete deployment.apps/cloudnative-pg --namespace ix-cloudnative-pg
- Update Cloudnative-PG to the latest version, or if you are already on the latest version, edit Cloudnative-PG and save/update it again without any changes.
- If the app remains stopped, hit the start button in the UI for Cloudnative-PG.
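If you want to verify that the old operator deployment is really gone before re-saving the app, a quick check (plain kubectl, shown as a sketch):
k3s kubectl get deployments -n ix-cloudnative-pg
The cloudnative-pg deployment should no longer be listed.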
"monitoring.coreos.com/v1" ensure CRDs are installed first
[EFAULT] Failed to update App: Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: resource mapping not found for name: "APPNAME" namespace: "ix-APPNAME" from "": no matches for kind "PodMonitor" in version "monitoring.coreos.com/v1" ensure CRDs are installed first
The solution:
- Install Prometheus-Operator first, then go back and install the app you were trying to install.
- If you see this error with Prometheus-Operator already installed, delete it and reinstall.
- While deleting Prometheus-Operator, if you encounter the error:
Error: [EFAULT] Unable to uninstall 'prometheus-operator' chart release: b'Error: failed to delete release: prometheus-operator\n'
run the following command from the TrueNAS SCALE shell as root:
k3s kubectl delete namespace ix-prometheus-operator
Then install Prometheus-Operator again. It will fail on the first install attempt, but the second attempt will succeed.
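Namespace deletion is asynchronous, so before the second install attempt you can check whether it has finished (plain kubectl):
k3s kubectl get namespace ix-prometheus-operator
Once this returns a NotFound error, the namespace is gone and the reinstall can proceed.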
Rendered manifests contain a resource that already exists
certificaterequests.cert-manager.io
[EFAULT] Failed to install App: Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "certificaterequests.cert-manager.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "cert-manager"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ix-cert-manager"
The solution: The Cert-Manager operator is required for Cert-Manager and ClusterIssuer to issue certificates for chart ingress.
To remove the previously auto-installed operator, run this in the system shell as root:
k3s kubectl delete --grace-period 30 --v=4 -k https://github.com/truecharts/manifests/delete4
https://truecharts.org/manual/FAQ#cert-manager
backups.postgresql.cnpg.io
[EFAULT] Failed to install App: Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "backups.postgresql.cnpg.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "cloudnative-pg"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ix-cloudnative-pg"
The solution: The Cloudnative-PG operator is required for any charts that utilize CloudNative PostgreSQL (CNPG).
DATA LOSS
The following command is destructive and will delete any existing CNPG databases.
Run the following command in the system shell as root to see whether you have any existing CNPG databases to migrate:
k3s kubectl get cluster -A
Follow this guide to safely migrate any existing CNPG databases.
To remove the previously auto-installed operator, run this in the system shell as root:
k3s kubectl delete --grace-period 30 --v=4 -k https://github.com/truecharts/manifests/delete2
https://truecharts.org/manual/FAQ#cloudnative-pg
addresspools.metallb.io
[EFAULT] Failed to install App: Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "addresspools.metallb.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "metallb"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ix-metallb"
The solution: The MetalLB operator is required for MetalLB to give each chart its own unique IP address.
LOSS OF CONNECTIVITY
Installing the MetalLB operator will prevent the use of the TrueNAS SCALE integrated load balancer. Only install this operator if you intend to use MetalLB.
To remove the previously auto-installed operator, run this in the system shell as root:
k3s kubectl delete --grace-period 30 --v=4 -k https://github.com/truecharts/manifests/delete
https://truecharts.org/manual/FAQ#metallb
alertmanagerconfigs.monitoring.coreos.com
[EFAULT] Failed to install chart release: Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "alertmanagerconfigs.monitoring.coreos.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "prometheus-operator"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ix-prometheus-operator"
The solution: The Prometheus operator is required for Prometheus metrics and for any charts that utilize CloudNative PostgreSQL (CNPG).
To remove the previously auto-installed operator, run this in the system shell as root:
k3s kubectl delete --grace-period 30 --v=4 -k https://github.com/truecharts/manifests/delete3
https://truecharts.org/manual/FAQ#prometheus-operator
Operator [traefik] has to be installed first
Failed to install App: Operator [traefik] has to be installed first
The solution: If this error appears while installing Traefik itself, first install Traefik with its own ingress disabled. Once it's installed, you can enable ingress for Traefik.
Operator [cloudnative-pg] has to be installed first
Failed to install App: Operator [cloudnative-pg] has to be installed first
The solution: Install Cloudnative-PG.
TIP
Ensure the system train is enabled in the TrueCharts catalog under Apps -> Discover Apps -> Manage Catalogs.
Operator [Prometheus-operator] has to be installed first
Failed to install App: Operator [prometheus-operator] has to be installed first
The solution: Install Prometheus-operator.
TIP
Ensure the system train is enabled in the TrueCharts catalog under Apps -> Discover Apps -> Manage Catalogs.
Can't upgrade between ghcr.io/cloudnative-pg/postgresql
[EFAULT] Failed to update App: Error: UPGRADE FAILED: cannot patch "APPNAME-cnpg-main" with kind Cluster: admission webhook "vcluster.cnpg.io" denied the request: Cluster.cluster.cnpg.io "APPNAME-cnpg-main" is invalid: spec.imageName: Invalid value: "ghcr.io/cloudnative-pg/postgresql:16.2": can't upgrade between ghcr.io/cloudnative-pg/postgresql:15.2 and ghcr.io/cloudnative-pg/postgresql:16.2
The solution: Run this in the system shell as root, replacing APPNAME with the name of your CNPG-dependent app, e.g. home-assistant:
k3s kubectl patch configmap APPNAME-cnpg-main-pgversion -n ix-APPNAME -p '{"data": {"version": "15"}}'
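To double-check that the patch landed, you can read the value back; a sketch using the home-assistant example from above (substitute your own app name):
k3s kubectl get configmap home-assistant-cnpg-main-pgversion -n ix-home-assistant -o jsonpath='{.data.version}'
It should print 15.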
zfs.csi.openebs.io
The error looks like the screenshot above: the app sits waiting for a volume to be created. The actual cause is that openebs-zfs-node and openebs-zfs-controller did not deploy successfully; these two components are the plugins that provision PVCs.
Run
k3s kubectl describe pod openebs-zfs-controller-0 -n kube-system
and check the Events section:
Events:
Type Reason Age From Message
Warning Failed 48m (x12 over 7h23m) kubelet Failed to pull image "k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": context deadline exceeded
Warning Failed 43m (x151 over 7h29m) kubelet (combined from similar events): Failed to pull image "k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal Pulling 38m (x72 over 7h39m) kubelet Pulling image "k8s.gcr.io/sig-storage/csi-resizer:v1.1.0"
Normal BackOff 3m33s (x1396 over 7h33m) kubelet Back-off pulling image "k8s.gcr.io/sig-storage/csi-resizer:v1.1.0"
You can see the images are hosted on k8s.gcr.io, Google's registry; because the proxy is too slow, or there is no proxy at all, the pulls fail.
Solution
We can manually pull the images from the Aliyun mirror registry in China and then retag them.
From the describe output above, the required images are: csi-resizer:v1.1.0, csi-snapshotter:v4.0.0, snapshot-controller:v4.0.0, csi-provisioner:v2.1.0, csi-node-driver-registrar:v2.1.0.
Pull and retag manually:
docker pull registry.aliyuncs.com/google_containers/csi-resizer:v1.1.0
docker tag registry.aliyuncs.com/google_containers/csi-resizer:v1.1.0 k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
docker rmi registry.aliyuncs.com/google_containers/csi-resizer:v1.1.0
docker pull registry.aliyuncs.com/google_containers/csi-snapshotter:v4.0.0
docker tag registry.aliyuncs.com/google_containers/csi-snapshotter:v4.0.0 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
docker rmi registry.aliyuncs.com/google_containers/csi-snapshotter:v4.0.0
docker pull registry.aliyuncs.com/google_containers/snapshot-controller:v4.0.0
docker tag registry.aliyuncs.com/google_containers/snapshot-controller:v4.0.0 k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
docker rmi registry.aliyuncs.com/google_containers/snapshot-controller:v4.0.0
docker pull registry.aliyuncs.com/google_containers/csi-provisioner:v2.1.0
docker tag registry.aliyuncs.com/google_containers/csi-provisioner:v2.1.0 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
docker rmi registry.aliyuncs.com/google_containers/csi-provisioner:v2.1.0
docker pull registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.1.0
docker tag registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.1.0 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
docker rmi registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.1.0
For SCALE 22.02.1:
docker pull registry.aliyuncs.com/google_containers/csi-resizer:v1.2.0
docker tag registry.aliyuncs.com/google_containers/csi-resizer:v1.2.0 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
docker rmi registry.aliyuncs.com/google_containers/csi-resizer:v1.2.0
docker pull registry.aliyuncs.com/google_containers/csi-snapshotter:v4.0.0
docker tag registry.aliyuncs.com/google_containers/csi-snapshotter:v4.0.0 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
docker rmi registry.aliyuncs.com/google_containers/csi-snapshotter:v4.0.0
docker pull registry.aliyuncs.com/google_containers/snapshot-controller:v4.0.0
docker tag registry.aliyuncs.com/google_containers/snapshot-controller:v4.0.0 k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
docker rmi registry.aliyuncs.com/google_containers/snapshot-controller:v4.0.0
docker pull registry.aliyuncs.com/google_containers/csi-provisioner:v3.0.0
docker tag registry.aliyuncs.com/google_containers/csi-provisioner:v3.0.0 k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
docker rmi registry.aliyuncs.com/google_containers/csi-provisioner:v3.0.0
docker pull registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.3.0
docker tag registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.3.0 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0
docker rmi registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.3.0
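Since the pull/tag/remove pattern is identical for every image, it can be wrapped in a loop; a minimal sh sketch (adjust the IMAGES list to whichever versions your describe output actually reported):
#!/bin/sh
# Image list for the stock release; swap in the 22.02.1 versions if that is what describe showed.
IMAGES="csi-resizer:v1.1.0 csi-snapshotter:v4.0.0 snapshot-controller:v4.0.0 csi-provisioner:v2.1.0 csi-node-driver-registrar:v2.1.0"
for IMG in $IMAGES; do
    # Pull from the Aliyun mirror, retag under the k8s.gcr.io name the kubelet expects, then drop the mirror tag.
    docker pull "registry.aliyuncs.com/google_containers/${IMG}"
    docker tag "registry.aliyuncs.com/google_containers/${IMG}" "k8s.gcr.io/sig-storage/${IMG}"
    docker rmi "registry.aliyuncs.com/google_containers/${IMG}"
done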
Permission errors
These usually come from incorrect permission settings when installing a custom app.
In the logs you will see something like permission denied or readonlyfilesystem.
For this class of problem, changing the permissions/ownership to root fixes it.
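A hedged example: if the custom app's data lives on a host path such as /mnt/pool/appdata (a hypothetical path; substitute your own), handing it to root from the SCALE shell looks like:
chown -R root:root /mnt/pool/appdata  # hypothetical path, adjust to your dataset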
K3S Fails to Start
This is actually a SCALE bug and the exact cause is unknown; it shows up as apps failing to start and the Installed Applications page spinning forever.
CRITICAL
Failed to start kubernetes cluster for Applications: 18
2022-03-21 08:02:08 (Asia/Shanghai)
Querying with
k3s kubectl get node
shows NotReady.
Checking the K3S service with
systemctl status k3s
shows failed, and so on.
Solutions
- Reboot
- Re-select the application pool
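Before rebooting, it can also be worth pulling the k3s service journal to see why it died (standard systemd tooling, shown as a sketch):
journalctl -u k3s -e
The last screenful usually names the component that failed; you can also follow it live with journalctl -u k3s -f while the service retries.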
KeyError: 'nodePort'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 423, in run
await self.future
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 459, in __run_body
rv = await self.method(*([self] + args))
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
res = await f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
return await func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 404, in do_create
new_values, context = await self.normalise_and_validate_values(item_details, new_values, False, release_ds)
File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 332, in normalise_and_validate_values
dict_obj = await self.middleware.call(
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1318, in call
return await self._call(
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
return await methodobj(*prepared_call.args)
File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 58, in validate_values
await self.validate_question(
File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 81, in validate_question
await self.validate_question(
File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 81, in validate_question
await self.validate_question(
File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 81, in validate_question
await self.validate_question(
[Previous line repeated 1 more time]
File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 112, in validate_question
verrors, parent_value, parent_value[sub_question['variable']], sub_question,
KeyError: 'nodePort'
This error usually appears when installing a custom app.
Solution
Just fill in any value for the nodePort and set the type to Simple.
Common Commands
TrueNAS SCALE uses K3S, and its commands are the same as K8S's.
Before inspecting an app, first find its namespace:
k3s kubectl get ns
Then you can query the specific app:
k3s kubectl get pod -n <namespace>
Restart an app:
k3s kubectl delete pod <pod-name> -n <namespace>
View logs (same effect as clicking Logs in the web UI):
k3s kubectl logs <pod-name> -n <namespace>
View events/description (same effect as clicking Events in the web UI):
k3s kubectl describe pod <pod-name> -n <namespace>
Start or stop an app (same as clicking start/stop in the web UI):
Stop:
k3s kubectl scale deployment <app-name> --replicas=0 -n <namespace>
Start:
k3s kubectl scale deployment <app-name> --replicas=1 -n <namespace>
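Two more variants that come in handy (plain kubectl, same commands as above):
k3s kubectl logs -f <pod-name> -n <namespace>   # follow a pod's logs live (Ctrl+C to stop)
k3s kubectl get pods -A | grep -v Running       # quick health sweep: every pod that is not Running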
Portions adapted from TrueCharts.
41 Comments
When I install Immich, the Kubernetes event shows: Readiness probe failed: Get "http://172.16.1.2:32002/ping": context deadline exceeded (Client.Timeout exceeded while awaiting headers). Where is the problem? I'm a total beginner; please advise.
In TrueNAS SCALE 23.10.1, under "Manage Container Images", pulling "snapshot-controller:6.2.2" shows this error: "[EFAULT] Failed to pull image: failed to pull and unpack image "registry.k8s.io/sig-storage/snapshot-controller:6.2.2": failed to resolve reference "registry.k8s.io/sig-storage/snapshot-controller:6.2.2": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/sig-storage/snapshot-controller/manifests/6.2.2": dial tcp 74.125.203.82:443: i/o timeout". Is it because the server is unreachable? But I am using a proxy, and I can't open that URL either.
One more question: Nextcloud has been stuck deploying ever since upgrading from App Version 27.1.3 / Chart Version 22.2.8 to App Version 27.1.4 / Chart Version 22.2.17. The event log reads: "2023-12-10 13:45:12
Job completed
2023-12-10 13:45:04
Created container nextcloud
2023-12-10 13:45:04
Started container nextcloud
2023-12-10 13:45:00
Created pod: nextcloud-nextcloud-cron-28369785-zlh4m
2023-12-10 13:45:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369785-zlh4m to ix-truenas
2023-12-10 13:45:00
Add eth0 [172.17.24.28/16] from ix-net
2023-12-10 13:45:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:40:16
Job completed
2023-12-10 13:40:04
Created container nextcloud
2023-12-10 13:40:04
Started container nextcloud
2023-12-10 13:40:00
Created pod: nextcloud-nextcloud-cron-28369780-g6q9r
2023-12-10 13:40:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369780-g6q9r to ix-truenas
2023-12-10 13:40:00
Add eth0 [172.17.24.27/16] from ix-net
2023-12-10 13:40:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:35:13
Job completed
2023-12-10 13:35:03
Created container nextcloud
2023-12-10 13:35:03
Started container nextcloud
2023-12-10 13:35:00
Created pod: nextcloud-nextcloud-cron-28369775-krwx4
2023-12-10 13:35:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369775-krwx4 to ix-truenas
2023-12-10 13:35:00
Add eth0 [172.17.24.26/16] from ix-net
2023-12-10 13:35:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:30:16
Job completed
2023-12-10 13:30:16
Saw completed job: nextcloud-preview-cron-28369770, status: Complete
2023-12-10 13:30:16
Job completed
2023-12-10 13:30:05
Started container nextcloud
2023-12-10 13:30:04
Created container nextcloud
2023-12-10 13:30:04
Started container nextcloud
2023-12-10 13:30:04
Created container nextcloud
2023-12-10 13:30:01
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:30:01
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:30:00
Created job nextcloud-preview-cron-28369770
2023-12-10 13:30:00
Created pod: nextcloud-nextcloud-cron-28369770-cfrd2
2023-12-10 13:30:00
Created pod: nextcloud-preview-cron-28369770-pnd69
2023-12-10 13:30:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369770-cfrd2 to ix-truenas
2023-12-10 13:30:00
Successfully assigned ix-nextcloud/nextcloud-preview-cron-28369770-pnd69 to ix-truenas
2023-12-10 13:30:00
Add eth0 [172.17.24.25/16] from ix-net
2023-12-10 13:30:00
Add eth0 [172.17.24.24/16] from ix-net
2023-12-10 13:25:14
Job completed
2023-12-10 13:25:04
Created container nextcloud
2023-12-10 13:25:04
Started container nextcloud
2023-12-10 13:25:00
Created pod: nextcloud-nextcloud-cron-28369765-qkwlz
2023-12-10 13:25:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369765-qkwlz to ix-truenas
2023-12-10 13:25:00
Add eth0 [172.17.24.23/16] from ix-net
2023-12-10 13:25:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:20:12
Job completed
2023-12-10 13:20:03
Created container nextcloud
2023-12-10 13:20:03
Started container nextcloud
2023-12-10 13:20:00
Created pod: nextcloud-nextcloud-cron-28369760-2hp2p
2023-12-10 13:20:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369760-2hp2p to ix-truenas
2023-12-10 13:20:00
Add eth0 [172.17.24.22/16] from ix-net
2023-12-10 13:20:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:15:15
Job completed
2023-12-10 13:15:04
Created container nextcloud
2023-12-10 13:15:04
Started container nextcloud
2023-12-10 13:15:00
Created pod: nextcloud-nextcloud-cron-28369755-sv87n
2023-12-10 13:15:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369755-sv87n to ix-truenas
2023-12-10 13:15:00
Add eth0 [172.17.24.21/16] from ix-net
2023-12-10 13:15:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:10:13
Job completed
2023-12-10 13:10:03
Created container nextcloud
2023-12-10 13:10:03
Started container nextcloud
2023-12-10 13:10:00
Created pod: nextcloud-nextcloud-cron-28369750-lw9g2
2023-12-10 13:10:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369750-lw9g2 to ix-truenas
2023-12-10 13:10:00
Add eth0 [172.17.24.20/16] from ix-net
2023-12-10 13:10:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:05:17
Job completed
2023-12-10 13:05:04
Created container nextcloud
2023-12-10 13:05:04
Started container nextcloud
2023-12-10 13:05:00
Created pod: nextcloud-nextcloud-cron-28369745-zd9hj
2023-12-10 13:05:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369745-zd9hj to ix-truenas
2023-12-10 13:05:00
Add eth0 [172.17.24.19/16] from ix-net
2023-12-10 13:05:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:00:14
Job completed
2023-12-10 13:00:14
Saw completed job: nextcloud-preview-cron-28369740, status: Complete
2023-12-10 13:00:14
Job completed
2023-12-10 13:00:04
Created container nextcloud
2023-12-10 13:00:04
Created container nextcloud
2023-12-10 13:00:04
Started container nextcloud
2023-12-10 13:00:04
Started container nextcloud
2023-12-10 13:00:01
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:00:01
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 13:00:00
Created job nextcloud-preview-cron-28369740
2023-12-10 13:00:00
Created pod: nextcloud-nextcloud-cron-28369740-f9nmz
2023-12-10 13:00:00
Created pod: nextcloud-preview-cron-28369740-5jxz4
2023-12-10 13:00:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369740-f9nmz to ix-truenas
2023-12-10 13:00:00
Successfully assigned ix-nextcloud/nextcloud-preview-cron-28369740-5jxz4 to ix-truenas
2023-12-10 13:00:00
Add eth0 [172.17.24.17/16] from ix-net
2023-12-10 13:00:00
Add eth0 [172.17.24.18/16] from ix-net
2023-12-10 12:55:12
Job completed
2023-12-10 12:55:04
Created container nextcloud
2023-12-10 12:55:04
Started container nextcloud
2023-12-10 12:55:00
Created pod: nextcloud-nextcloud-cron-28369735-v8rr2
2023-12-10 12:55:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369735-v8rr2 to ix-truenas
2023-12-10 12:55:00
Add eth0 [172.17.24.16/16] from ix-net
2023-12-10 12:55:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 12:50:15
Job completed
2023-12-10 12:50:03
Created container nextcloud
2023-12-10 12:50:03
Started container nextcloud
2023-12-10 12:50:00
Created pod: nextcloud-nextcloud-cron-28369730-9dvdg
2023-12-10 12:50:00
Successfully assigned ix-nextcloud/nextcloud-nextcloud-cron-28369730-9dvdg to ix-truenas
2023-12-10 12:50:00
Add eth0 [172.17.24.15/16] from ix-net
2023-12-10 12:50:00
Container image "tccr.io/truecharts/nextcloud-fpm:v27.1.4@sha256:37c75210a17c144fb64a8080428fefe6ae11016e2db31416004bc4db62f32e20" already present on machine
2023-12-10 12:40:21
Back-off restarting failed container nextcloud in pod nextcloud-57dc848b56-c664b_ix-nextcloud(93270ebe-9a63-4629-a015-034ddf571c2a)
2023-12-10 12:39:49
Startup probe failed: dial tcp 172.17.24.11:9000: connect: connection refused
2023-11-24 11:05:00
(combined from similar events): Created job nextcloud-nextcloud-cron-28369785”
I can't see anything wrong in it.
Boss, every community app I try to install reports the error below. Is there any way to fix it?
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/namespace.py", line 28, in do_create
    await Namespace.create(data['body'])
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 106, in create
    return await cls.call(cls.uri(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 84, in call
    return await cls.api_call(uri, mode, body, headers, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 45, in api_call
    async with cls.request(endpoint, mode, body, headers, timeout) as resp:
  File "/usr/lib/python3.9/contextlib.py", line 175, in __aenter__
    return await self.gen.__anext__()
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 33, in request
    raise ApiException(f'Received {resp.status!r} response code from {endpoint!r}')
middlewared.plugins.kubernetes_linux.k8s.exceptions.ApiException: Received 422 response code from '/api/v1/namespaces'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 465, in do_create
    await self.middleware.call('k8s.namespace.create', {'body': namespace_body})
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1368, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1317, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/service.py", line 940, in create
    rv = await self.middleware._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1317, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1247, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1379, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/namespace.py", line 30, in do_create
    raise CallError(f'Unable to create namespace: {e}')
middlewared.service_exception.CallError: [EFAULT] Unable to create namespace: Received 422 response code from '/api/v1/namespaces'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/namespace.py", line 48, in do_delete
    await Namespace.delete(namespace)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 118, in delete
    return await cls.call(cls.uri(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 84, in call
    return await cls.api_call(uri, mode, body, headers, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 45, in api_call
    async with cls.request(endpoint, mode, body, headers, timeout) as resp:
  File "/usr/lib/python3.9/contextlib.py", line 175, in __aenter__
    return await self.gen.__anext__()
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 33, in request
    raise ApiException(f'Received {resp.status!r} response code from {endpoint!r}')
middlewared.plugins.kubernetes_linux.k8s.exceptions.ApiException: Received 404 response code from '/api/v1/namespaces/ix-qbitmanage'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 427, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 465, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1247, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1379, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 499, in do_create
    await self.post_remove_tasks(data['release_name'], job)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 640, in post_remove_tasks
    await self.middleware.call('k8s.namespace.delete', get_namespace(release_name))
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1368, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1317, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/service.py", line 962, in delete
    rv = await self.middleware._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1317, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1247, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1379, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/namespace.py", line 50, in do_delete
    raise CallError(f'Unable delete namespace: {e}')
middlewared.service_exception.CallError: [EFAULT] Unable delete namespace: Received 404 response code from '/api/v1/namespaces/ix-qbitmanage'
Teacher sagit, using the PVC troubleshooting method from your blog I installed librespeed successfully, but vaultwarden gets stuck at the deployment stage. Before deploying the vaultwarden app, I had already pulled the ghcr.io/cloudnative-pg/pgbouncer:1.19.1 image from the command line.
Manually pulling the two images below from the TrueNAS command line and then installing vaultwarden again worked:
docker pull ghcr.io/cloudnative-pg/postgresql:15.3
docker pull ghcr.io/cloudnative-pg/pgbouncer:1.19.1
Where is the group?
Boss, what do I do about this error when installing RESILIO SYNC? I installed it successfully once before, but after the host's IP address changed it disappeared from Apps, and reinstalling gives this error:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 461, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1152, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1284, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 397, in do_create
    await self.middleware.call('kubernetes.validate_k8s_setup')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1306, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1255, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/update.py", line 496, in validate_k8s_setup
    raise CallError(error)
middlewared.service_exception.CallError: [EFAULT] Kubernetes service is not running.
Boss, how do I solve this problem? The disks came off a RAID card, and initializing the storage pool reports: Error: [EFAULT] Disk: 'sdb' is incorrectly formatted with Data Integrity Feature (DIF).
Did you ever solve this? I have the same problem.
May I ask why installing MetalLB gets stuck on the issue below and won't start?
MountVolume.SetUp failed for volume "kube-api-access-prkbq" : object "ix-metallb"/"kube-root-ca.crt" not registered
Firefox deployed via docker-in-docker cannot accept Chinese input.
I deployed Firefox following your docker-compose video tutorial.
From Chrome on Windows I connect to the Firefox container's VNC on TrueNAS, but Windows' Sogou input method does not work; it only types the raw key that was pressed.
On top of that, copy/paste between Firefox and Windows has to go through the container's built-in visual clipboard, as if the Firefox instance were isolated.
I've seen people in Japan repackage the Firefox image together with a Japanese input method into a new image, and others use D-Bus as an intermediary, but none of it is something a beginner like me can follow (╯‵□′)╯︵┴─┴
I was planning to write a Tampermonkey script to download anime via RSS, since my proxy depends on a browser extension so this is my only option.
Now I can type neither Chinese nor Japanese, which is really frustrating, so I'm asking the boss for help.
Hello, I've watched many of your videos and built a CentOS 7 VM myself.
On the LAN I can ping and SSH into the VM, but coming in from outside over VPN, or through a Traefik reverse proxy, the VM is unreachable. What could be the reason?
dalao, hello: my RAIDZ lost power in the middle of a disk replacement. After powering back on I can't see the replacement progress. What should I do?
Why does my TrueNAS reboot frequently? Sometimes it reboots out of nowhere while an app is installing, usually logging an unscheduled reboot, and sometimes after the reboot the Installed Applications page shows no apps at all until several more reboots. Is this a bug or a configuration problem?
Honestly, TrueNAS SCALE's integrated container management has quite a few problems, and system upgrades tend to bring new ones.
Also, k3s can be regarded as a lightweight k8s; both manage container clusters. TrueNAS SCALE still runs Docker underneath, with a k3s layer on top to manage the containers.
After successfully installing the community Nextcloud, I clicked upgrade once inside the app and it never started again. Deleting and reinstalling the app doesn't help either; the events just repeatedly create and delete containers. The only error I see is
Startup probe failed: Get "http://172.16.2.124:7867/push/test/cookie": dial tcp 172.16.2.124:7867: connect: connection refused
The connection probe fails, and the app's own log contains no errors. How do I solve this? Please advise.
2022-08-10 16:20:02
Deleted job nextcloud-cronjob-27668650
2022-08-10 16:20:02
Saw completed job: nextcloud-cronjob-27668660, status: Complete
2022-08-10 16:20:02
Job completed
2022-08-10 16:20:01
Started container nextcloud
2022-08-10 16:20:01
Created container nextcloud
2022-08-10 16:20:00
Container image "tccr.io/truecharts/nextcloud-fpm:v24.0.3@sha256:bd950c86f788ad9937941b48917209238a06aef5f9e1546996edb4e8220a8ec0" already present on machine
2022-08-10 16:20:00
Add eth0 [172.16.2.127/16] from ix-net
Successfully assigned ix-nextcloud/nextcloud-cronjob-27668660-gncs2 to ix-truenas
2022-08-10 16:20:00
Created pod: nextcloud-cronjob-27668660-gncs2
2022-08-10 16:20:00
Created job nextcloud-cronjob-27668660
2022-08-10 16:15:01
Deleted job nextcloud-cronjob-27668645
2022-08-10 16:15:01
Saw completed job: nextcloud-cronjob-27668655, status: Complete
2022-08-10 16:15:01
Job completed
2022-08-10 16:15:01
Started container nextcloud
2022-08-10 16:15:01
Created container nextcloud
2022-08-10 16:15:00
Container image "tccr.io/truecharts/nextcloud-fpm:v24.0.3@sha256:bd950c86f788ad9937941b48917209238a06aef5f9e1546996edb4e8220a8ec0" already present on machine
2022-08-10 16:15:00
Add eth0 [172.16.2.126/16] from ix-net
Successfully assigned ix-nextcloud/nextcloud-cronjob-27668655-n6fv7 to ix-truenas
2022-08-10 16:15:00
Created pod: nextcloud-cronjob-27668655-n6fv7
2022-08-10 16:15:00
Created job nextcloud-cronjob-27668655
2022-08-10 16:08:47
Startup probe failed: Get "http://172.16.2.124:7867/push/test/cookie": dial tcp 172.16.2.124:7867: connect: connection refused
2022-08-10 16:10:01
Deleted job nextcloud-cronjob-27668640
2022-08-10 16:10:01
Saw completed job: nextcloud-cronjob-27668650, status: Complete
2022-08-10 16:10:01
Job completed
2022-08-10 16:10:01
Started container nextcloud
2022-08-10 16:10:01
Created container nextcloud
2022-08-10 16:10:00
Container image "tccr.io/truecharts/nextcloud-fpm:v24.0.3@sha256:bd950c86f788ad9937941b48917209238a06aef5f9e1546996edb4e8220a8ec0" already present on machine
Same here. Did you solve it? How?
There is also the error message: Back-off restarting failed container. Not sure what exactly causes it.
App name: nextcloud, version: 15.2.35. Pasting my configuration here:
Show Advanced Controller Settings: false
Show Expert Configuration Options: false
Image Secrets:
  NEXTCLOUD_ADMIN_USER (First Install Only): --
  NEXTCLOUD_ADMIN_PASSWORD (First Install Only): --
Image Environment:
  Trusted Proxies (First Install Only - Advanced): 172.16.0.0/16 127.0.0.1
  PHP_MEMORY_LIMIT: 1G
  PHP_UPLOAD_LIMIT: 10G
  Access IP: 192.168.50.2
Preview Generation Configuration:
  Preview Max X: 2048
  Preview Max Y: 2048
  Preview Max Memory: 512
  Preview Max Filesize Image: 150
  Generate previews for: PNG: true, JPEG: true, GIF: true, BMP: true, XBitmap: true, MP3: true, MarkDown: true, OpenDocument: true, TXT: true, Krita: true, Illustrator: false, HEIC: true, Movie: true, MSOffice2003: false, MSOffice2007: false, MSOfficeDoc: false, PDF: false, Photoshop: false, Postscript: false, StarOffice: false, SVG: false, TIFF: false, Font: false
Timezone: 'Asia/Shanghai'
Show Expert Configuration: false
Configure Service(s):
  Main Service:
    Service Type: Simple
    Main Service Port Configuration:
      Port: 10020
      Show Advanced Settings: false
      Show Expert Config: false
Integrated Persistent Storage:
  App html Storage:
    Type of Storage: Host Path (simple)
    Automatic Permissions: false
    Read Only: false
    Host Path: /mnt/vm/app/nextcloud
    Show Advanced Options: false
  UserData Storage:
    Type of Storage: Host Path (simple)
    Automatic Permissions: false
    Read Only: false
    Host Path: /mnt/home/users
    Show Advanced Options: false
Ingress:
  Main Ingress:
    Enable Ingress: true
    Hosts: 1
    TLS-Settings: 1
    (Advanced) Traefik Entrypoint: websecure
    Show Expert Configuration Options: false
Container Security Settings:
  Change PUID / UMASK values: false
  Show Advanced Security Settings: false
Pod Security Context:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 33
  When should we take ownership?: OnRootMismatch
Set Custom Resource Limits/Requests (Advanced): false
GPU Configuration:
  GPU Resource (gpu.intel.com/i915): Allocate 0 gpu.intel.com/i915 GPU
VPN:
  Type: disabled
Codeserver:
  Enabled: false
Promtail:
  Enabled: false
Netshoot:
  Enabled: false
Hoping the boss can take a look.
大佬22.02.2.1版本可以更新吗,没有科学上网,更新后害怕不可以使用
Error: ImagePullBackOff
After manually pulling the image and installing the app again, this problem appeared.
Try rebooting.
2022-06-22 12:36:14
error killing pod: failed to "KillPodSandbox" for "026140c2-f4d7-4e87-9d95-5e7cc2597feb" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"alist-6c6958c7df-8gv26_ix-alist\" network: cni config uninitialized"
2022-06-22 12:40:02
Marking for deletion Pod ix-alist/svclb-alist-2ttv8
Every single app errors out like this?
I set an HTTP proxy in the network settings, pointing at a Clash service running in a Docker container (other devices test fine).
The proxy seems to have no effect, and after enabling the HTTP proxy, the Installed Applications page loads forever.
What could be the reason?
You may have proxied local addresses too. You can set a proxy for Docker only; search the blog for "docker", there is a post on setting a Docker HTTP proxy.
waiting for a volume to be created, either by external provisioner "zfs.csi.openebs.io" or manually created by system administrator
What is going on here?
That one is covered in this troubleshooting guide.
apt won't run; it reports:
zsh: permission denied: apt
and I'm using the root user.
chmod +x /usr/bin/apt*
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
2022-04-11 22:37:55
Created pod: transmission0-df847cc8f-sqv8w
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
2022-04-11 22:19:46
Scaled up replica set transmission0-df847cc8f to 1
skip schedule deleting pod: ix-transmission0/transmission0-df847cc8f-r5t6z
2022-04-11 22:21:54
Deleted pod: transmission0-df847cc8f-r5t6z
2022-04-11 21:06:12
Scaled down replica set transmission0-df847cc8f to 0
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
0/1 nodes are available: 1 node(s) had taint {ix-svc-stop: }, that the pod didn't tolerate.
2022-04-05 21:28:29
Failed to pull image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413": rpc error: code = Unknown desc = Error response from daemon: Get "https://ghcr.io/v2/": read tcp 192.168.2.88:46484->20.205.243.164:443: read: connection reset by peer
2022-04-05 21:27:06
Pulling image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413"
2022-04-05 21:27:07
Error: ImagePullBackOff
2022-04-05 21:27:07
Back-off pulling image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413"
2022-04-05 21:27:07
Error: ImagePullBackOff
2022-04-05 21:27:07
Back-off pulling image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413"
2022-04-05 21:27:06
Error: ErrImagePull
2022-04-05 21:27:48
Failed to pull image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413": rpc error: code = Unknown desc = Error response from daemon: Get "https://ghcr.io/v2/": read tcp 192.168.2.88:46336->20.205.243.164:443: read: connection reset by peer
2022-04-05 21:27:06
Error: ErrImagePull
2022-04-05 21:27:48
Failed to pull image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413": rpc error: code = Unknown desc = Error response from daemon: Get "https://ghcr.io/v2/": read tcp 192.168.2.88:46334->20.205.243.164:443: read: connection reset by peer
2022-04-05 21:27:06
Pulling image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413"
2022-04-05 21:27:22
Failed to pull image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413": rpc error: code = Unknown desc = Error response from daemon: Get "https://ghcr.io/v2/": read tcp 192.168.2.88:46206->20.205.243.164:443: read: connection reset by peer
2022-04-05 21:27:21
Failed to pull image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413": rpc error: code = Unknown desc = Error response from daemon: Get "https://ghcr.io/v2/": read tcp 192.168.2.88:46194->20.205.243.164:443: read: connection reset by peer
2022-04-05 21:27:06
Failed to pull image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413": rpc error: code = Unknown desc = Error response from daemon: Get "https://ghcr.io/v2/": read tcp 192.168.2.88:46102->20.205.243.164:443: read: connection reset by peer
2022-04-05 21:27:06
Failed to pull image "ghcr.io/truecharts/postgresql:v14.2.0@sha256:73deda809571b1ba9e70dbe7223dfdee3bf17806fad1ad14fd809314372d1413": rpc error: code = Unknown desc = Error response from daemon: Get "https://ghcr.io/v2/": read tcp 192.168.2.88:46092->20.205.243.164:443: read: connection reset by peer
2022-04-05 21:27:06
Started container hostpatch
Boss, what's causing this error when installing home-a?
No proxy here.
Boss, how should I fix the following error when installing the community app qbittorrent?
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 423, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 459, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 404, in do_create
    new_values, context = await self.normalise_and_validate_values(item_details, new_values, False, release_ds)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 332, in normalise_and_validate_values
    dict_obj = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1318, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 58, in validate_values
    await self.validate_question(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 81, in validate_question
    await self.validate_question(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 81, in validate_question
    await self.validate_question(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 81, in validate_question
    await self.validate_question(
  [Previous line repeated 1 more time]
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/validation.py", line 112, in validate_question
    verrors, parent_value, parent_value[sub_question['variable']], sub_question,
KeyError: 'nodePort'
See the troubleshooting guide pinned on the blog.
I keep running into:
Back-off restarting failed container
Does the OP have a solution?
A mosdns upgrade hit this error; deleting it and reinstalling solved the problem.
I keep getting this error while deploying 可道云 (KodCloud) following the tutorial, and I still don't know how to fix it...
You only ever tell half the story. Where is that command line supposed to be entered??? Out of nowhere comes "enter this command", then "a little configuration and you're done".
I don't know how to solve it either.
For this kind of error you have to read the logs.
Mine is the first error above... it simply won't work without a proxy.
So, emmm, I don't get why plain Docker wasn't chosen instead of k3s. Maybe k3s has other advantages, even if ordinary users like us never need them.