Redspeed93

Members
  • Posts: 16
  • Joined
  • Last visited

  1. Some time ago I expanded my array with 2 more drives - and now I've just realized these are, for some reason, not encrypted like the rest of the pool. How do I get them encrypted like the rest of the pool, and how do I avoid this happening the next time I add new disks?
  2. Can't write anything to influxdb - keep getting 204...

     [httpd] 192.168.1.107, 192.168.1.107,172.17.0.1 - root [31/Oct/2021:15:41:27 +0100] "POST /query?db=varken&epoch=ms HTTP/1.1" 200 57 "-" "Grafana/8.2.2" a0df45ab-3a58-11ec-813c-0242ac110006 369
     ts=2021-10-31T14:41:27.218265Z lvl=info msg="Executing query" log_id=0XXhEm6W000 service=query query="SELECT count(distinct(hash)) FROM varken.\"varken 30d-1h\".Tautulli WHERE time >= 1635504903167ms AND time <= 1635677703167ms GROUP BY time(10m)"
     [httpd] 192.168.1.107, 192.168.1.107,172.17.0.1 - root [31/Oct/2021:15:41:27 +0100] "POST /query?db=varken&epoch=ms HTTP/1.1" 200 57 "-" "Grafana/8.2.2" a0df8b9f-3a58-11ec-813d-0242ac110006 356
     [httpd] 192.168.1.100 - - [31/Oct/2021:15:41:32 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.20.2 Go/1.17" a3e07f00-3a58-11ec-813e-0242ac110006 31861
     [httpd] 172.17.0.1 - root [31/Oct/2021:15:41:37 +0100] "POST /write?db=varken HTTP/1.1" 204 0 "-" "python-requests/2.21.0" a7101c85-3a58-11ec-813f-0242ac110006 25664
     [httpd] 192.168.1.100 - - [31/Oct/2021:15:41:42 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.20.2 Go/1.17" a9d669d8-3a58-11ec-8140-0242ac110006 35903

     Any Ideas?
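For what it's worth, those access-log lines can be scanned mechanically to see which requests actually failed - in InfluxDB 1.x, a 204 No Content on /write is the success response, not an error. A minimal sketch (the regex assumes the stock InfluxDB httpd access-log format shown above):

```python
import re

# Captures the request path and HTTP status code from an
# InfluxDB 1.x httpd access-log line.
LOG_RE = re.compile(
    r'\[httpd\] [^"]*"(?P<method>\w+) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})'
)

def failed_requests(log_text):
    """Return (path, status) for requests that actually failed.

    For /write, InfluxDB 1.x returns 204 No Content on success,
    so a 204 means the points were accepted, not rejected.
    """
    failures = []
    for m in LOG_RE.finditer(log_text):
        status = int(m.group("status"))
        if status >= 400:
            failures.append((m.group("path"), status))
    return failures

sample = ('[httpd] 192.168.1.100 - - [31/Oct/2021:15:41:32 +0100] '
          '"POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.20.2 Go/1.17"')
print(failed_requests(sample))  # → [] : the 204 write succeeded
```

If writes truly never land, the cause is usually elsewhere (retention policy, database name, or the querying side), since nothing here is a 4xx/5xx.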
  3. 2 x LSI SAS-3 9300-8i + 1 x LSI SAS-2 9211-8i
  4. No dice with -y or -Y for hdparm, even though 12 of 14 drives are SATA (connected through HBAs).
  5. hdparm sleep/standby does nothing to my disks. Looking at the log when manually spinning down a disk via the GUI simply shows "emhttpd: spinning down /dev/sd*" - but what command is used to do this spindown?
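The log doesn't reveal what emhttpd runs internally, but the usual userspace equivalents differ by transport: `hdparm -y` issues an ATA STANDBY IMMEDIATE, while SAS drives want a SCSI STOP UNIT, e.g. `sg_start --stop` from sg3_utils - which would explain why hdparm is a no-op on the SAS members. A hypothetical dry-run helper (device names are examples, nothing is executed):

```python
def spindown_command(device, transport):
    """Return the spindown command line for a disk (dry run only).

    Assumptions: 'ata' disks accept hdparm's STANDBY IMMEDIATE (-y);
    'sas' disks need a SCSI STOP UNIT, here via sg_start from sg3_utils.
    """
    if transport == "ata":
        return ["hdparm", "-y", device]
    if transport == "sas":
        return ["sg_start", "--stop", device]
    raise ValueError(f"unknown transport: {transport}")

print(spindown_command("/dev/sdb", "ata"))  # → ['hdparm', '-y', '/dev/sdb']
print(spindown_command("/dev/sdc", "sas"))  # → ['sg_start', '--stop', '/dev/sdc']
```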
  6. No fix for the SAS spindown issue... *sigh*
  7. I upgraded to 6.9.0 a few days ago and wanted to re-align my cache to see if that boosted performance. So, to do that, I started by moving everything from the cache onto the array, which worked flawlessly. And once the cache had been 1MiB-aligned I told the mover to get everything back onto the cache. That also worked flawlessly - except for 1 share that just refuses to GET OFF MY DAMN LAWN ARRAY! Diagnostics and an image showing the problematic share included. vault-diagnostics-20210305-1658.zip
  8. I would disagree completely. If the point of the cache was simply to receive large files I might as well not have a cache, or just use HDDs as cache. The entire point of having a cache and using SSDs, at least for me, is to avoid the slow write speeds I would otherwise encounter with HDDs when writing many smaller files. While I don't have any data, I also feel like the write speeds now, after having added 2 more SSDs to the cache, are basically identical to what they were previously when only 2 SSDs were used in RAID 1, which adds further to my belief that there is either a configuration error or a bug.
  9. Previously I had 2 SATA SSDs in BTRFS RAID 1 as my cache and performance was very slow, so I decided to add 2 more SSDs and change to BTRFS RAID 5 - however, write speeds still seem to be abysmal. To confirm that I didn't just have unrealistic expectations, I tried writing an identical folder first to my Unraid cache and then to a Windows 10 PC with a single budget NVME SSD. It took 65 seconds to write the folder to the Unraid RAID 5 cache and 37 seconds to write it to the Windows 10 PC with the lone budget NVME SSD. Looking at the transfers themselves, during the part dealing with thousands of smaller files the Unraid cache slows to a grinding halt, with transfer speeds of just a few hundred kilobytes per second, while the cheapo NVME SSD manages to stay above at least 1 MB/s at all times. I know Windows' transfer window is just a guesstimate, but given the total transfer time it seems to be accurate. Surely 4 SATA SSDs working together in RAID 5 can't be H A L F the speed of a cheap, low capacity NVME SSD? Surely...? vault-diagnostics-20200725-2338.zip
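Putting rough numbers on that comparison (same folder on both targets, so speed is inversely proportional to elapsed time):

```python
cache_seconds = 65  # 4 x SATA SSD, btrfs RAID 5 cache (Unraid)
nvme_seconds = 37   # single budget NVMe SSD (Windows 10)

# Same amount of data in both runs, so relative throughput is the
# inverse ratio of the elapsed times.
relative_speed = nvme_seconds / cache_seconds
print(f"cache runs at {relative_speed:.0%} of the NVMe drive's speed")
# → cache runs at 57% of the NVMe drive's speed
```

So "half the speed" is only a slight exaggeration for the overall transfer, and the small-file phase is far worse than that.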
  10. So RAID5 is actually working properly but UNRAID is just visually not showing the correct amount of free space?
  11. Previously my cache setup was 2 x 1TB SSDs in RAID1 using BTRFS. I then added 2 additional 1TB SSDs and switched to RAID5. However, my capacity now shows as 4TB and not the 3TB I would hope to see when 1TB should be lost to parity. How do I get to "properly" use RAID5 with parity? Or is UNRAID just not capable of doing so? vault-diagnostics-20200706-1833.zip
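The expectation above is right: for equal-sized devices, RAID 5 should expose (n - 1) x the device size as usable space, with one device's worth going to parity. A quick sanity check of the numbers:

```python
def raid5_usable_tb(num_devices, device_tb):
    """Usable capacity of RAID 5 over equal-sized devices:
    one device's worth of capacity is consumed by parity."""
    if num_devices < 3:
        raise ValueError("RAID 5 needs at least 3 devices")
    return (num_devices - 1) * device_tb

print(raid5_usable_tb(4, 1))  # → 3 : 4 x 1TB should show 3TB usable, not 4TB
```

btrfs allocates parity at the chunk level, though, so its free-space figures are often reported against raw capacity rather than post-parity capacity - the data can still be parity-protected even when the displayed total looks wrong.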
  12. I've recently added 2 new disks to my array and was quite surprised to see these new disks actually spin down after the 30-minute delay I've chosen, because every other disk has basically always ignored this - so I thought the feature was just broken. All disks always stay down if I manually spin them down. vault-diagnostics-20200605-2245.zip
  13. I have 2 docker containers that say "update ready" but are unable to update (i.e. the update process does what it normally does, but then still says "update ready" when done). binhex-delugevpn is still working despite this, but my PlexMediaServer seems to be completely unreachable after attempting to update. Also, after these failed updates, when I go to the "Docker" tab in Unraid it literally takes a minute or more to load the page, despite normally taking a fraction of a second. Any help fixing this issue, or worst case restoring from my appdata backup, would be appreciated! vault-diagnostics-20200520-1248.zip
  14. I let it run for a few days after changing the C-states and now it runs without hiccups.
  15. Yep, Ryzen 1700. Diagnostics attached. vault-diagnostics-20200122-2039.zip