Posts posted by mattyx
-
Got it. Thanks again for the help!
-
Oddly, after crossing about 5.5% parity check completion, the parity speed has returned to normal levels (currently ~142MB/sec). I am also seeing more normal CPU utilization (~20%, no cores maxed). I'm not sure why, but I'll take it. I'll keep monitoring this for a few more hours, and will mark this solved if the slow parity issue doesn't return.
In the meantime, if there are any other issues I should look into, I welcome the pointers.
Thanks to Squid and JorgeB for the help!!
M
-
Got it, thanks! Here's the output:
# sar -dp 5 5
Linux 4.19.107-Unraid (Tower) 01/02/2021 _x86_64_ (6 CPU)

11:54:41 AM DEV tps rkB/s wkB/s dkB/s areq-sz aqu-sz await %util
11:54:46 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM loop3 0.20 0.00 0.80 0.00 4.00 0.00 0.00 0.00
11:54:46 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM sdb 8.80 2011.20 31.20 0.00 232.09 0.10 72.50 9.84
11:54:46 AM sdc 6.00 2011.20 4.00 0.00 335.87 0.04 68.93 3.72
11:54:46 AM sdd 5.20 2011.20 0.00 0.00 386.77 0.01 9.81 1.26
11:54:46 AM sdg 7.80 2011.20 16.00 0.00 259.90 0.01 1.18 0.58
11:54:46 AM sdh 6.60 2011.20 4.00 0.00 305.33 0.01 23.30 0.92
11:54:46 AM sdi 5.80 2011.20 0.00 0.00 346.76 0.01 1.97 0.84
11:54:46 AM sdj 7.00 2011.20 8.00 0.00 288.46 0.01 1.23 0.58
11:54:46 AM sde 5.00 2011.20 0.00 0.00 402.24 0.01 3.08 1.02
11:54:46 AM sdf 8.40 2017.60 35.20 0.00 244.38 0.29 403.21 29.50
11:54:46 AM md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md9 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md1 0.00 0.00 0.00 0.00 0.00 155.03 0.00 100.02
11:54:46 AM md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md4 0.00 0.00 0.00 0.00 0.00 8.00 0.00 100.02
11:54:46 AM md5 0.00 0.00 0.00 0.00 0.00 8.00 0.00 100.02
11:54:46 AM md6 0.20 0.00 0.20 0.00 1.00 0.35 0.00 34.60
11:54:46 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:46 AM md9 0.00 0.00 0.00 0.00 0.00 362.07 0.00 100.02
11:54:46 AM nvme0n1p1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

11:54:46 AM DEV tps rkB/s wkB/s dkB/s areq-sz aqu-sz await %util
11:54:51 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM loop3 0.20 0.00 0.80 0.00 4.00 0.00 0.00 0.00
11:54:51 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM nvme0n1 0.20 0.00 0.80 0.00 4.00 0.12 0.00 12.48
11:54:51 AM sdb 41.00 6505.60 164.80 0.00 162.69 0.26 56.82 25.66
11:54:51 AM sdc 24.00 6484.00 23.20 0.00 271.13 0.16 24.77 15.56
11:54:51 AM sdd 19.80 6630.40 1.60 0.00 334.95 0.04 3.13 3.86
11:54:51 AM sdg 27.20 6628.80 49.60 0.00 245.53 0.02 1.09 1.98
11:54:51 AM sdh 25.00 6648.00 23.20 0.00 266.85 0.04 10.62 3.42
11:54:51 AM sdi 19.00 6628.80 0.00 0.00 348.88 0.04 3.11 3.68
11:54:51 AM sdj 29.00 6629.60 70.40 0.00 231.03 0.02 1.08 2.12
11:54:51 AM sde 19.00 6628.80 0.00 0.00 348.88 0.04 3.21 3.80
11:54:51 AM sdf 43.60 6669.60 168.00 0.00 156.83 0.24 85.37 23.62
11:54:51 AM md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM md4 0.80 0.00 25.60 0.00 32.00 0.00 0.00 0.00
11:54:51 AM md5 0.60 0.00 19.20 0.00 32.00 0.00 0.00 0.00
11:54:51 AM md6 0.40 0.00 0.30 0.00 0.75 0.00 0.00 0.00
11:54:51 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM md9 2.80 0.00 42.40 0.00 15.14 0.00 0.00 0.00
11:54:51 AM md1 0.00 0.00 0.00 0.00 0.00 155.00 0.00 100.00
11:54:51 AM md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM md4 1.40 0.00 44.80 0.00 32.00 8.00 22283.00 100.00
11:54:51 AM md5 1.20 0.00 38.40 0.00 32.00 8.00 19750.67 100.00
11:54:51 AM md6 0.40 0.00 0.30 0.00 0.75 0.24 1471.50 24.26
11:54:51 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:51 AM md9 0.00 0.00 0.00 0.00 0.00 362.00 0.00 100.00
11:54:51 AM nvme0n1p1 0.20 0.00 0.80 0.00 4.00 0.00 0.00 0.00

11:54:51 AM DEV tps rkB/s wkB/s dkB/s areq-sz aqu-sz await %util
11:54:56 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM loop3 5.00 0.00 56.00 0.00 11.20 0.00 0.20 0.00
11:54:56 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM nvme0n1 5.40 0.00 59.30 0.00 10.98 0.14 0.11 14.14
11:54:56 AM sdb 58.80 6160.80 184.00 0.00 107.90 0.36 117.16 35.94
11:54:56 AM sdc 22.80 6106.40 24.00 0.00 268.88 0.07 20.24 7.24
11:54:56 AM sdd 15.40 5916.80 0.00 0.00 384.21 0.03 2.68 2.80
11:54:56 AM sdg 32.00 5823.20 72.00 0.00 184.22 0.09 10.93 8.58
11:54:56 AM sdh 25.40 5961.60 19.20 0.00 235.46 0.06 34.20 5.88
11:54:56 AM sdi 15.20 5916.80 0.00 0.00 389.26 0.03 2.53 2.90
11:54:56 AM sdj 37.80 6079.20 65.60 0.00 162.56 0.13 93.25 12.82
11:54:56 AM sde 16.20 5928.80 0.00 0.00 365.98 0.05 4.85 5.18
11:54:56 AM sdf 58.80 6154.40 184.00 0.00 107.80 0.41 135.38 41.06
11:54:56 AM md1 11.60 32.00 65.90 0.00 8.44 0.00 0.00 0.00
11:54:56 AM md2 1.20 12.00 0.00 0.00 10.00 0.00 0.00 0.00
11:54:56 AM md4 5.00 25.60 12.80 0.00 7.68 0.00 0.00 0.00
11:54:56 AM md5 0.80 0.00 25.60 0.00 32.00 0.00 0.00 0.00
11:54:56 AM md6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM md9 14.20 0.00 162.40 0.00 11.44 0.00 0.00 0.00
11:54:56 AM md1 2.00 32.00 0.00 0.00 16.00 101.57 1322202.80 100.00
11:54:56 AM md2 1.20 12.00 0.00 0.00 10.00 0.01 12.50 1.50
11:54:56 AM md4 5.60 25.60 32.00 0.00 10.29 8.32 2425.25 100.00
11:54:56 AM md5 1.60 0.00 51.20 0.00 32.00 8.00 14493.62 100.00
11:54:56 AM md6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:54:56 AM md9 0.00 0.00 0.00 0.00 0.00 346.78 0.00 100.00
11:54:56 AM nvme0n1p1 5.40 0.00 61.80 0.00 11.44 0.00 0.19 0.10

11:54:56 AM DEV tps rkB/s wkB/s dkB/s areq-sz aqu-sz await %util
11:55:01 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM loop3 0.20 0.00 0.80 0.00 4.00 0.00 0.00 0.00
11:55:01 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM nvme0n1 6.20 0.00 25.30 0.00 4.08 0.24 0.16 24.18
11:55:01 AM sdb 32.00 4322.40 112.80 0.00 138.60 0.15 77.39 14.82
11:55:01 AM sdc 15.40 4113.60 17.60 0.00 268.26 0.04 28.73 3.76
11:55:01 AM sdd 10.60 4100.80 0.00 0.00 386.87 0.06 7.79 5.98
11:55:01 AM sdg 18.80 4296.00 20.00 0.00 229.57 0.14 68.47 14.30
11:55:01 AM sdh 21.00 4169.60 16.80 0.00 199.35 0.05 16.46 5.14
11:55:01 AM sdi 10.60 4100.80 0.00 0.00 386.87 0.02 2.83 2.30
11:55:01 AM sdj 18.00 4100.80 58.40 0.00 231.07 0.01 1.07 1.34
11:55:01 AM sde 10.60 4100.80 0.00 0.00 386.87 0.02 3.36 2.44
11:55:01 AM sdf 32.40 4164.80 109.60 0.00 131.93 0.42 403.14 42.40
11:55:01 AM md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md4 6.20 56.00 19.20 0.00 12.13 0.00 0.00 0.00
11:55:01 AM md5 0.40 0.00 12.80 0.00 32.00 0.00 0.00 0.00
11:55:01 AM md6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md9 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md1 0.00 0.00 0.00 0.00 0.00 29.00 0.00 100.00
11:55:01 AM md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md4 6.60 56.00 32.00 0.00 13.33 9.00 1915.21 100.00
11:55:01 AM md5 0.80 0.00 25.60 0.00 32.00 8.00 8086.00 100.00
11:55:01 AM md6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:01 AM md9 0.20 0.00 3.20 0.00 16.00 288.68 0.00 100.00
11:55:01 AM nvme0n1p1 6.20 0.00 31.40 0.00 5.06 0.00 0.13 0.08

11:55:01 AM DEV tps rkB/s wkB/s dkB/s areq-sz aqu-sz await %util
11:55:06 AM loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:06 AM loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:06 AM loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:06 AM loop3 0.20 0.00 0.80 0.00 4.00 0.00 0.00 0.00
11:55:06 AM sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:06 AM nvme0n1 6.80 0.00 43.10 0.00 6.34 0.08 0.09 7.66
11:55:06 AM sdb 35.60 5771.20 124.80 0.00 165.62 0.25 49.19 25.02
11:55:06 AM sdc 30.40 5803.20 17.60 0.00 191.47 0.16 22.88 16.42
11:55:06 AM sdd 19.00 5752.80 0.00 0.00 302.78 0.06 4.33 6.38
11:55:06 AM sdg 18.80 5699.20 24.00 0.00 304.43 0.03 2.67 2.60
11:55:06 AM sdh 22.20 5741.60 18.40 0.00 259.46 0.08 13.93 8.06
11:55:06 AM sdi 17.80 5738.40 0.00 0.00 322.38 0.05 3.51 5.22
11:55:06 AM sdj 25.60 5734.40 64.00 0.00 226.50 0.04 11.91 4.40
11:55:06 AM sde 14.60 5693.60 0.00 0.00 389.97 0.03 3.36 3.24
11:55:06 AM sdf 34.20 5730.40 124.00 0.00 171.18 0.35 93.43 34.92
11:55:06 AM md1 0.60 0.80 4.10 0.00 8.17 0.00 0.00 0.00
11:55:06 AM md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:06 AM md4 3.00 29.60 12.80 0.00 14.13 0.00 0.00 0.00
11:55:06 AM md5 11.60 96.80 19.20 0.00 10.00 0.00 0.00 0.00
11:55:06 AM md6 4.60 59.20 0.00 0.00 12.87 0.00 0.00 0.00
11:55:06 AM md7 3.40 44.80 0.00 0.00 13.18 0.00 0.00 0.00
11:55:06 AM md9 0.40 0.80 3.20 0.00 10.00 0.00 0.00 0.00
11:55:06 AM md1 0.80 0.80 5.00 0.00 7.25 7.58 487712.50 36.32
11:55:06 AM md2 0.20 0.00 0.20 0.00 1.00 0.15 0.00 14.76
11:55:06 AM md4 3.60 29.60 32.00 0.00 17.11 8.55 1527.72 100.00
11:55:06 AM md5 12.00 96.80 32.00 0.00 10.73 9.18 750.15 100.00
11:55:06 AM md6 4.60 59.20 0.00 0.00 12.87 0.04 7.61 3.34
11:55:06 AM md7 3.40 44.80 0.00 0.00 13.18 0.03 8.71 2.90
11:55:06 AM md9 0.20 0.80 0.00 0.00 4.00 288.34 34519.00 100.00
11:55:06 AM nvme0n1p1 7.00 0.00 43.40 0.00 6.20 0.00 0.03 0.02

Average: DEV tps rkB/s wkB/s dkB/s areq-sz aqu-sz await %util
Average: loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: loop3 1.16 0.00 11.84 0.00 10.21 0.00 0.17 0.00
Average: sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: nvme0n1 3.72 0.00 25.70 0.00 6.91 0.12 0.12 11.69
Average: sdb 35.24 4954.24 123.52 0.00 144.09 0.22 79.93 22.26
Average: sdc 19.72 4903.68 17.28 0.00 249.54 0.09 26.44 9.34
Average: sdd 14.00 4882.40 0.32 0.00 348.77 0.04 4.56 4.06
Average: sdg 20.92 4891.68 36.32 0.00 235.56 0.06 16.50 5.61
Average: sdh 20.04 4906.40 16.32 0.00 245.64 0.05 19.39 4.68
Average: sdi 13.68 4879.20 0.00 0.00 356.67 0.03 2.94 2.99
Average: sdj 23.48 4911.04 53.28 0.00 211.43 0.04 33.12 4.25
Average: sde 13.08 4872.64 0.00 0.00 372.53 0.03 3.66 3.14
Average: sdf 35.48 4947.36 124.16 0.00 142.94 0.35 176.59 34.30
Average: md1 2.44 6.56 14.00 0.00 8.43 0.00 0.00 0.00
Average: md2 0.24 2.40 0.00 0.00 10.00 0.00 0.00 0.00
Average: md4 3.00 22.24 14.08 0.00 12.11 0.00 0.00 0.00
Average: md5 2.68 19.36 15.36 0.00 12.96 0.00 0.00 0.00
Average: md6 1.00 11.84 0.06 0.00 11.90 0.00 0.00 0.00
Average: md7 0.68 8.96 0.00 0.00 13.18 0.00 0.00 0.00
Average: md9 3.48 0.16 41.60 0.00 12.00 0.00 0.00 0.00
Average: md1 0.56 6.56 1.00 0.00 13.50 89.63 1083777.00 87.27
Average: md2 0.28 2.40 0.04 0.00 8.71 0.03 10.71 3.25
Average: md4 3.44 22.24 28.16 0.00 14.65 8.37 3658.01 100.00
Average: md5 3.12 19.36 29.44 0.00 15.64 8.24 3997.51 100.00
Average: md6 1.04 11.84 0.10 0.00 11.48 0.12 119.92 12.44
Average: md7 0.68 8.96 0.00 0.00 13.18 0.01 8.71 0.58
Average: md9 0.08 0.16 0.64 0.00 10.00 329.58 3027542.00 100.00
Average: nvme0n1p1 3.76 0.00 27.48 0.00 7.31 0.00 0.11 0.04
-
Update: The Docker rebuild does not seem to have solved it. I did notice that my Arq backup to Unraid was running a cleanup task, so I paused that. Since then, speeds have gone from ~1MB/sec to ~20MB/sec, still well below the normal ~100MB/sec average.
-
Hey Jorge,
I don't seem to have the system report binary installed, nor do I see it in Nerd Pack. Any ideas?
-
I've just deleted my docker.img, reinstalled my Docker containers, and performed a clean restart, but the parity issue remains...
-
They seem to be... all are reachable via their web UIs.
Any idea how I can figure out which container is doing that? (And the predictable follow-up question): any suggestions for how to solve it?
EDIT: As I sent this, Sonarr went unavailable and can't be stopped from the web GUI. I think maybe that's the one... Thanks, Squid!
-
Greetings fellow UnRAIDers, and happy new year!
I'm seeing some very slow parity check speeds (see below) on my monthly parity check today. I did reboot a couple of times today after some slowness in the web UI and a Docker crash (everything unavailable). The reboot seems to have sorted the Docker problem, but now I have a new issue: slow parity speed. Speeds range between ~25MB/sec and under 1MB/sec, far slower than normal.
Enclosed are diagnostics taken during the slow parity check, and a couple screenshots. Any help is very much appreciated. Cheers!
EDIT: I should also mention that I'm seeing sustained high CPU usage on the dashboard page (yet not when using htop): several cores at 100%, and overall between 30% and 80% CPU load. Not sure if this is related or not...
-
+1, I would love to have a Caddy Docker for UnRaid! Thanks for considering doing that!
-
Quick (and probably obvious) question: how does one update the Jitsi components when deployed this way?
-
FWIW, I would love this. +1 demand unit.
-
52 minutes ago, saarg said:
This has nothing to do with our containers and has to do with something in unraid.
Just curious what this assertion is based on. Could you share some details?
-
Hmm. Either this feature is broken, or something is confusing it. I confirmed manually that there are no overlapping files, the parent folders are identically named, and I even copied a single season rather than the whole lot. No dice; I'm only offered "Replace" or "Stop".
EDIT: I totally missed the part where I have to hold option. My mistake! Sorry! It's working now.
Thanks so much for the replies!
-
Unless there's a merge feature in the Finder (there very well may be), the problem is that shows and even portions of seasons are split between the correct and incorrect locations. I tried what you suggested, but I am only offered "replace" or "stop" when I drop the folder.
Any ideas?
-
Thanks for the tip!
One challenge is that there are many subdirectories, so moving them manually will be time consuming (and tedious). I am wondering if there's a sweet one-liner someone could suggest that would move everything up a directory level while preserving the subdirectory structure.
Example:
Moving files from
/mnt/user/TV/TV/Show/Season-06
to
/mnt/user/TV/Show/Season-06 (which may already exist and have some content).
Thanks!!
-
Hi there,
I have noticed recently that my TV directory has a nested TV directory within it that contains a subset of shows and seasons of shows. Obviously, this is not ideal, although Plex seems fine with it.
2 questions:
1. Aside from reviewing my docker paths, what else can I do to determine the cause of this?
2. What is the best way to move these all up one directory (from /mnt/user/TV/TV to /mnt/user/TV) while preserving the directory structure? I can do this with mc, but I'd prefer a one-liner to spending the time doing it manually.
Thanks!
-
Found the issue: DNS rebinding protection provided by Google's DNS servers (the default on Google Wifi). More info here: https://support.google.com/wifi/answer/9144137
I changed to Cloudflare's DNS, and I can now use the unraid.net address. Hopefully this helps someone.
-
Sometime last week, following a reboot, I stopped being able to reach my local Unraid device at its hashed unraid.net address. I can still reach it by entering the IP. The error thrown in Chrome is: "server IP address could not be found". I thought I read something about Google Wifi potentially being the culprit (I use that), but now I can't find it. I have a local DNS entry that is also not working as a result of this.
Anyone have any ideas? I've cleared cookies and rebooted everything...
Thanks!
-
One final question: is there any reason to not delete the key from memory once the drives are unlocked? I've recently begun formatting and encrypting and I'm trying to understand if there's a scenario where I would want that key to remain in memory.
Thanks!
-
3 hours ago, limetech said:
Not exactly. When you initially decide to use encryption you have to decide, "Am I going to use a passphrase or a file as my encryption/decryption method?".
If you use passphrase, this is a string you have to type correctly each time you reboot your server. The longer the string, the harder it will be for someone to crack. Make it long enough and it's supposedly impossible to crack, but then you have to type it exactly - no fat fingers.
Alternately you can use a file, that is, you can pick a file and upload instead of using a passphrase. The advantage of this approach is that you can use a relatively large file that is filled with random text or even binary data. The file content is what's used as the encryption/decryption key. For example, you can use maybe a random image file, or create one with random data. Of course now you have the problem of keeping a safe copy of that file somewhere.
Regardless of which method you use, unRAID will store the encryption key in a file called "/root/keyfile". (If you use passphrase, we just store the passphrase in this file in plain text just as you entered it. If you upload key file, we save its content in this file.)
Saving the passphrase in /root/keyfile may seem insecure (and it can be), but realize this is RAM and when server powers off, the file is gone. Also, as stated earlier, you can explicitly delete the file once the array has been Started - actually you can delete any time. Perhaps in future we may change code so that every time the array is Started we auto-delete the /root/keyfile - we'll see.
OH! I think I (finally) had the lightbulb moment.
I was thinking the key would remain on Unraid (somewhere on the USB stick); I didn't consider that it would be uploaded from another secured machine each time. This now makes a lot more sense, as does your initial comment about deleting it.
Thank you very much for the clear explanation, I hope it helps more people than just me.
-
Got it, that makes sense. Thanks!
Summarizing (for myself): Using a keyfile would be good for a threat model involving array disks being stolen (and for ease of use), but probably not the best if you're concerned with the entire server being stolen.
EDIT: This assumes the key stays on the Unraid USB stick, which it does not. It is uploaded each time, in place of manually entering a password. limetech clears this up on page 2, but I thought it was worth an edit here to call out my mistake.
-
17 minutes ago, limetech said:
Once array is Started there is a button on the bottom of Main that lets you delete the keyfile.
So if I am understanding the flow here (I don't think the lightbulb is on just yet):
1. Create passphrase key doc, add it to LUKS
2. Reboot, using passphrase doc.
3. Start Array
4. <Optional, good for security> Delete old keyfile
5. <Mandatory if you did #4> Regenerate and re-add keyfile to LUKS before next reboot/power failure, or...? I assume you'd fall back to a password that you'd manually enter...?
-
On 11/16/2017 at 6:54 AM, bonienl said:
You can create a long passphrase and store this in a file, then instead of choosing a passphrase select the file with your stored passphrase.
Doesn't that undo the protection of FDE in the first place, since the key is sitting unencrypted on the box? It seems to me that if your threat model includes physical theft of the server, this might not be a good idea.
Honest question; apologies if I am missing something obvious.
-
The repair appears to have worked, and the array is now able to start. I don't see any data loss and only 2 files in the lost+found folder.
Many thanks!
[Support] Linuxserver.io - SmokePing
in Docker Containers
Posted · Edited by mattyx
Changing quote to code for readability.
Hi,
I've had smokeping running for several months, but something seems to have broken recently, and no pings are hitting the graphs (historical data is still visible).
The logs show:
ERROR: we did not get config from the master. Maybe we are not configured as a slave for any of the targets on the master ?
WARNING: No secret found for slave 8090ceb9b10d
I don't recall ever messing with a master/slave config, so I'm at a loss. Does anyone have pointers on where I should investigate?
Many thanks!
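For reference (and hedging heavily, since I never set this up myself): the "No secret found for slave" warning relates to SmokePing's master/slave mode, where the master's config names each slave and points at a shared secrets file. Per the SmokePing documentation, that section looks roughly like the sketch below; the slave name and path here are made-up examples, not values from my setup:

```
*** Slaves ***
secrets=/config/smokeping_secrets

+someslave
display_name=someslave
color=0000ff
```

The secrets file then holds one `slavename:secret` line per slave. If no such section exists, the container seems to be attempting slave mode anyway using its hostname (8090ceb9b10d here) as the slave name.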