
Redspeed93

Members
  • Posts

    16
  • Joined

  • Last visited

Posts posted by Redspeed93

  1. Some time ago I expanded my array with 2 more drives, and I've just realized that for some reason these are not encrypted like the rest of the pool. How do I get them encrypted like the rest of the pool, and how do I avoid this happening the next time I add new disks?


    image.png (screenshot attached)

  2. Can't write anything to InfluxDB - I keep getting 204s...

     

    [httpd] 192.168.1.107, 192.168.1.107,172.17.0.1 - root [31/Oct/2021:15:41:27 +0100] "POST /query?db=varken&epoch=ms HTTP/1.1" 200 57 "-" "Grafana/8.2.2" a0df45ab-3a58-11ec-813c-0242ac110006 369
    ts=2021-10-31T14:41:27.218265Z lvl=info msg="Executing query" log_id=0XXhEm6W000 service=query query="SELECT count(distinct(hash)) FROM varken.\"varken 30d-1h\".Tautulli WHERE time >= 1635504903167ms AND time <= 1635677703167ms GROUP BY time(10m)"
    [httpd] 192.168.1.107, 192.168.1.107,172.17.0.1 - root [31/Oct/2021:15:41:27 +0100] "POST /query?db=varken&epoch=ms HTTP/1.1" 200 57 "-" "Grafana/8.2.2" a0df8b9f-3a58-11ec-813d-0242ac110006 356
    [httpd] 192.168.1.100 - - [31/Oct/2021:15:41:32 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.20.2 Go/1.17" a3e07f00-3a58-11ec-813e-0242ac110006 31861
    [httpd] 172.17.0.1 - root [31/Oct/2021:15:41:37 +0100] "POST /write?db=varken HTTP/1.1" 204 0 "-" "python-requests/2.21.0" a7101c85-3a58-11ec-813f-0242ac110006 25664
    [httpd] 192.168.1.100 - - [31/Oct/2021:15:41:42 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.20.2 Go/1.17" a9d669d8-3a58-11ec-8140-0242ac110006 35903

     

    Any ideas?
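    A quick sanity check on logs like the ones above is to tally which endpoint returned which status code. The following is a sketch assuming the `[httpd]` access-log format shown; the sample lines are copied from the log excerpt:

```python
import re
from collections import Counter

# Sample lines in the [httpd] access-log format shown above.
LOG = '''\
[httpd] 192.168.1.100 - - [31/Oct/2021:15:41:32 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.20.2 Go/1.17" a3e07f00-3a58-11ec-813e-0242ac110006 31861
[httpd] 172.17.0.1 - root [31/Oct/2021:15:41:37 +0100] "POST /write?db=varken HTTP/1.1" 204 0 "-" "python-requests/2.21.0" a7101c85-3a58-11ec-813f-0242ac110006 25664
[httpd] 192.168.1.107, 192.168.1.107,172.17.0.1 - root [31/Oct/2021:15:41:27 +0100] "POST /query?db=varken&epoch=ms HTTP/1.1" 200 57 "-" "Grafana/8.2.2" a0df45ab-3a58-11ec-813c-0242ac110006 369
'''

# Extract the request path and HTTP status code from each [httpd] line.
PATTERN = re.compile(r'"(?:GET|POST) (/\w+)\?[^"]*" (\d{3})')

def tally(log_text):
    """Count (endpoint, status) pairs across all log lines."""
    counts = Counter()
    for line in log_text.splitlines():
        m = PATTERN.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

print(tally(LOG))
```

    If the tally shows only 204s on `/write` and 200s on `/query`, the server is at least not returning error codes for those requests.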

  3. I upgraded to 6.9.0 a few days ago and wanted to re-align my cache to see if that boosted performance.

    So, to do that I started by moving everything from the cache onto the array, which worked flawlessly. And once the cache had been 1MiB-aligned I told the mover to get everything back onto the cache. That also worked flawlessly - except for 1 share that just refuses to GET OFF MY DAMN LAWN ARRAY!

     

    Diagnostics and an image showing the problematic share are included.


    shares.png

    vault-diagnostics-20210305-1658.zip

  4. 4 hours ago, johnnie.black said:

    Small files are not ideal to test speed, is speed normal with large files? You can also try the new beta, it aligns SSDs on the 1MiB boundary (requires reformatting the pool) and that usually results in better performance.

     

    Example of a transfer of 3 large files totaling about 15GB, destination is a raid5 pool with 5 cheap 120GB SSDs:

     

    Screenshot 2020-07-22 17_44_37.png (attached)


    I would disagree completely. If the point of the cache were simply to receive large files, I might as well not have a cache, or just use HDDs as cache. The entire point of having a cache and using SSDs, at least for me, is to avoid the slow write speed that I would otherwise encounter with HDDs when writing many smaller files.

     

    While I don't have any data, I also feel like the write speeds now, after having added 2 more SSDs to the cache, are basically identical to what they were previously when only 2 SSDs were used in RAID 1, which adds further to my belief that there is either a configuration error or a bug.

  5. Previously I had 2 SATA SSDs in BTRFS RAID 1 as my cache, and performance was very slow, so I decided to add 2 more SSDs and change to BTRFS RAID 5. However, write speeds still seem to be abysmal.

     

    To confirm that I didn't just have unrealistic expectations, I tried writing an identical folder first to my Unraid cache and then to a Windows 10 PC with a single budget NVMe SSD.

     

    It took 65 seconds to write the folder to the Unraid RAID 5 cache and 37 seconds to write the folder to the Windows 10 PC with the lone budget NVMe SSD.

     

    Looking at the transfers themselves, Windows reports that during the part of the transfer dealing with thousands of smaller files the Unraid cache slows to a grinding halt, with transfer speeds of just a few hundred kilobytes per second, while the cheapo NVMe SSD manages to stay above at least 1 MB/s at all times.

    writing to BTRFS RAID5 cache.png (attached)
    writing to cheap NVME SSD on W10.png (attached)

     

    I know Windows' transfer window is just a guesstimate, but given the total transfer time it seems to be accurate.

     

    Surely 4 SATA SSDs working together in RAID 5 can't be H A L F the speed of a cheap, low-capacity NVMe SSD? Surely...?

    vault-diagnostics-20200725-2338.zip
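    The small-file comparison above can be made reproducible with a simple timing script. This is a sketch only; the file count and size are made-up placeholders, and the target directory would need to point at a share backed by the cache pool rather than a temporary directory:

```python
import os
import tempfile
import time

def write_small_files(target_dir, count=500, size=4096):
    """Write `count` files of `size` bytes each; return elapsed seconds."""
    payload = os.urandom(size)
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(target_dir, f"file_{i:05d}.bin")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each file through the page cache
    return time.perf_counter() - start

if __name__ == "__main__":
    # For a real test, replace the temp dir with a cache-backed path,
    # e.g. /mnt/cache/test on Unraid.
    with tempfile.TemporaryDirectory() as d:
        elapsed = write_small_files(d)
        mb_per_s = 500 * 4096 / elapsed / 1e6
        print(f"{elapsed:.1f} s, ~{mb_per_s:.2f} MB/s for 500 x 4 KiB files")
```

    Running the same script against both machines would remove the guesswork from Windows' transfer dialog.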

  6. I have 2 docker containers that say "update ready" but are unable to update (i.e. the update process does what it normally does, but then still says "update ready" when done). binhex-delugevpn is still working despite this, but my PlexMediaServer seems to be completely unreachable after attempting to update.

    Also, after these failed updates, when I go to the "Docker" tab in Unraid it literally takes a minute or more to load the page, despite normally taking a fraction of a second.

     

    Any help fixing this issue or worst case restoring from my appdata backup would be appreciated!

    vault-diagnostics-20200520-1248.zip

  7. Hi,

     

    My server has crashed about once every 1-2 days ever since I got it up and running. By crashed I mean that it remains powered on, but it's not responding to connection requests of any kind, nor does my network equipment show the server as connected to the network.

     

    I see no errors in the log from around when the server should have crashed. Actually, I see no entries of note at all.

     

    The following is an example from a log:

     

    Jan 22 06:45:16 Vault webGUI: Successful login user root from 192.168.1.10
    Jan 22 06:50:58 Vault nginx: 2020/01/22 06:50:58 [error] 3704#3704: *445774 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.10, server: , request: "GET /webGui/include/ShareList.php?compute=yes&path=Shares&scale=-1&number=.%2C&fill=ssz HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "vault", referrer: "https://vault/Shares"
    Jan 22 07:24:56 Vault kernel: mdcmd (97): spindown 2
    Jan 22 07:30:08 Vault kernel: mdcmd (98): spindown 3
    Jan 22 10:19:18 Vault kernel: mdcmd (99): spindown 2
    Jan 22 10:19:19 Vault kernel: mdcmd (100): spindown 3
    Jan 22 11:26:43 Vault kernel: mdcmd (101): spindown 3
    Jan 22 12:33:46 Vault kernel: mdcmd (102): spindown 3
    Jan 22 13:03:22 Vault kernel: mdcmd (103): spindown 2
    Jan 22 13:13:35 Vault kernel: mdcmd (104): spindown 3
    Jan 22 16:39:06 Vault kernel: mdcmd (105): spindown 0
    Jan 22 16:39:07 Vault kernel: mdcmd (106): spindown 1
    Jan 22 19:48:47 Vault kernel: Linux version 4.19.94-Unraid (root@Develop) (gcc version 9.2.0 (GCC)) #1 SMP Thu Jan 9 08:20:36 PST 2020

     

    The server crashed somewhere between 10:00:00, when I know there was a successful Plex connection, and 19:48:47, when I forced the server to reboot. The server is connected to a UPS and all hardware was pretested before being put to use in the system...

     

    Anyone have any idea why my server keeps dying?
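    One way to narrow down when a server like this goes silent is to parse the syslog timestamps and flag unusually large gaps between consecutive entries. A sketch, assuming the standard `Mon DD HH:MM:SS` syslog prefix and that all lines fall within the same year (the sample lines are from the log excerpt above):

```python
from datetime import datetime, timedelta

SAMPLE = '''\
Jan 22 16:39:06 Vault kernel: mdcmd (105): spindown 0
Jan 22 16:39:07 Vault kernel: mdcmd (106): spindown 1
Jan 22 19:48:47 Vault kernel: Linux version 4.19.94-Unraid
'''

def find_gaps(log_text, threshold=timedelta(hours=1)):
    """Yield (previous_line, line) pairs where the timestamp jumps by more
    than `threshold` - a likely crash-and-reboot boundary."""
    prev_ts, prev_line = None, None
    for line in log_text.splitlines():
        # Syslog prefix is the first 15 characters: "Jan 22 16:39:06"
        ts = datetime.strptime(line[:15], "%b %d %H:%M:%S")
        if prev_ts is not None and ts - prev_ts > threshold:
            yield prev_line, line
        prev_ts, prev_line = ts, line

for before, after in find_gaps(SAMPLE):
    print("last entry before gap:", before)
    print("first entry after gap:", after)
```

    In the excerpt above this flags the jump from the 16:39:07 spindown to the 19:48:47 kernel boot line, which matches the forced reboot; on a box that crashes silently, the last line before the gap is the closest thing to a clue the log will give.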
