
Leaderboard


Popular Content

Showing content with the highest reputation since 08/20/19 in Reports

  1. 5 points
    Since I can remember, Unraid has never been great at simultaneous array disk performance, but it used to be acceptable. Since v6.7 various users have been complaining of, for example, very poor performance when running the mover and trying to stream a movie at the same time. I noticed this myself yesterday when I couldn't even start watching an SD video in Kodi just because there were writes going on to a different array disk, and that server doesn't even have a parity drive. So I did a quick test on my test server: the problem is easily reproducible and started with the first v6.7 release candidate, rc1.

    How to reproduce:
    - The server just needs two assigned array data devices (no parity needed, but the same happens with parity) and one cache device, no encryption, all devices btrfs formatted.
    - Use cp to copy a few video files from cache to disk2.
    - While the cp is going on, try to stream a movie from disk1: it takes a long time to start and keeps stalling/buffering.

    Then tried to copy one file from disk1 (still while the cp to disk2 was going on):

    with v6.6.7: (screenshot)
    with v6.7-rc1: (screenshot)

    A few times the transfer goes higher for a couple of seconds, but most of the time it's at a few KB/s or completely stalled. Also tried with all devices unencrypted and xfs formatted, and it was the same.

    The server where the problem was detected and the test server have no hardware in common: one is based on a Supermicro X11 board, the test server on an X9 series board; one uses HDDs, the test server SSDs. So this is very unlikely to be hardware related.
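The reproduction steps above can be scripted. This is a minimal sketch that uses temporary directories to stand in for the real mount points (/mnt/cache, /mnt/disk1 and /mnt/disk2 on an actual Unraid server), so the file names and sizes are illustrative only:

```shell
#!/bin/sh
# Stand-ins for /mnt/cache, /mnt/disk1 and /mnt/disk2 so the sketch
# runs anywhere; on a real server substitute the actual mount points.
CACHE=$(mktemp -d)
DISK1=$(mktemp -d)
DISK2=$(mktemp -d)

# Create a dummy "video" on the cache and a probe file on disk1.
dd if=/dev/zero of="$CACHE/video.bin" bs=1M count=16 2>/dev/null
dd if=/dev/zero of="$DISK1/movie.bin" bs=1M count=16 2>/dev/null

# Sustained write from cache to disk2 in the background
# (this is the step that starves concurrent readers on v6.7+)...
cp "$CACHE/video.bin" "$DISK2/" &
CP_PID=$!

# ...while reading from disk1, as a stand-in for streaming a movie.
# On v6.7+ this read reportedly drops to a few KB/s or stalls.
dd if="$DISK1/movie.bin" of=/dev/null bs=1M 2>/dev/null
READ_DONE=yes

wait "$CP_PID"
COPIED_SIZE=$(wc -c < "$DISK2/video.bin")
rm -rf "$CACHE" "$DISK1" "$DISK2"
```

On a real server you would watch the read throughput (for example in the Unraid dashboard or with iostat) while the copy runs, rather than relying on the commands finishing.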
  2. 1 point
    So I appear to be having a problem with dockers, specifically Linuxserver ones, but they told me it is an unRAID issue and "not just us." I chatted with someone from Linuxserver in private and they said it is an issue with "Update all containers": the dockers will say there is an update ready, but applying the update does not actually change anything. Tried manually updating a docker, same result. gibson-diagnostics-20190829-1841.zip
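One way to check whether an "update" actually did anything is to compare the image the container runs against the image ID behind the freshly pulled tag. A hedged sketch; the container name and image tag below are placeholders, not taken from the report:

```shell
#!/bin/sh
# Placeholders; substitute your real container name and image tag.
CONTAINER=myapp
IMAGE=linuxserver/myapp:latest

if command -v docker >/dev/null 2>&1; then
    # Image ID the running container was created from.
    BEFORE=$(docker inspect --format '{{.Image}}' "$CONTAINER")
    # Pull the tag; a genuine update replaces the local image.
    docker pull "$IMAGE" >/dev/null
    # Image ID now behind the tag.
    AFTER=$(docker inspect --format '{{.Id}}' "$IMAGE")
    if [ "$BEFORE" = "$AFTER" ]; then
        MSG="no new image: container already matches the pulled tag"
    else
        MSG="new image pulled: recreate the container to apply it"
    fi
else
    MSG="docker not available here; commands shown for reference"
fi
echo "$MSG"
```

If the IDs differ but the container keeps running the old image after an "update", the update mechanism is not recreating the container, which matches the behaviour described above.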
  3. 1 point
    Likely related to this bug, but this one is more serious: any new multi-device pool created on v6.7+ will be created with the raid1 profile for data but the single (or DUP, if HDDs are used) profile for metadata, so if one of the devices fails the pool will be toast.
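For an affected pool, the allocation profiles can be inspected, and the metadata converted, with stock btrfs-progs commands; the mount point below is illustrative (Unraid mounts the cache pool at /mnt/cache):

```shell
# Show the allocation profiles of the pool; look at the Metadata line.
btrfs filesystem df /mnt/cache

# If Metadata shows "single" or "DUP" on a multi-device pool, rebalance
# it to raid1 so a single failed device no longer destroys the pool:
btrfs balance start -mconvert=raid1 /mnt/cache
```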
  4. 1 point
    When you change the Tunable (enable NCQ) setting to Yes in Disk Settings and click Apply, it does nothing to enable NCQ; I have to manually change queue_depth to 31 from the CLI.

    The submitted form data:

    startArray=yes&spindownDelay=0&spinupGroups=no&defaultFormat=2&defaultFsType=xfs&shutdownTimeout=90&poll_attributes=600&queueDepth=0&nr_requests=128&md_num_stripes=4096&md_sync_window=2048&md_sync_thresh=2000&md_write_method=0&changeDisk=Apply&csrf_token=****************

    The log shows nr_requests being set, but queue_depth is never touched:

    Aug 22 17:09:51 Tower emhttpd: shcmd (9335): echo 128 > /sys/block/sdf/queue/nr_requests
    Aug 22 17:09:51 Tower emhttpd: shcmd (9336): echo 128 > /sys/block/sde/queue/nr_requests
    Aug 22 17:09:51 Tower emhttpd: shcmd (9337): echo 128 > /sys/block/sdb/queue/nr_requests
    Aug 22 17:09:51 Tower emhttpd: shcmd (9338): echo 128 > /sys/block/sdd/queue/nr_requests
    Aug 22 17:09:52 Tower emhttpd: shcmd (9339): echo 128 > /sys/block/sdc/queue/nr_requests
    Aug 22 17:09:52 Tower kernel: mdcmd (95): set md_num_stripes 4096
    Aug 22 17:09:52 Tower kernel: mdcmd (96): set md_sync_window 2048
    Aug 22 17:09:52 Tower kernel: mdcmd (97): set md_sync_thresh 2000
    Aug 22 17:09:52 Tower kernel: mdcmd (98): set md_write_method 0
    Aug 22 17:09:52 Tower kernel: mdcmd (99): set spinup_group 0 0
    Aug 22 17:09:52 Tower kernel: mdcmd (100): set spinup_group 1 0
    Aug 22 17:09:52 Tower kernel: mdcmd (101): set spinup_group 2 0
    Aug 22 17:09:52 Tower kernel: mdcmd (102): set spinup_group 3 0
    Aug 22 17:09:52 Tower kernel: mdcmd (103): set spinup_group 4 0
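The manual workaround can be looped over the array members. A sketch, assuming the same sdb through sdf device names that appear in the log (adjust to your own assignments; writing to sysfs needs root):

```shell
#!/bin/sh
# Set the SATA queue depth for each array device. Note that queue_depth
# lives under device/, not queue/ (where emhttpd only sets nr_requests).
TRIED=0
for dev in sdb sdc sdd sde sdf; do
    qd=/sys/block/$dev/device/queue_depth
    if [ -w "$qd" ]; then
        echo 31 > "$qd"
        echo "$dev: queue_depth now $(cat "$qd")"
    else
        echo "$dev: $qd not writable (device absent or not root)"
    fi
    TRIED=$((TRIED + 1))
done
```

This only papers over the bug until the next array start; the Apply handler itself still never writes queue_depth.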