Leaderboard

Popular Content

Showing content with the highest reputation since 06/16/20 in Reports

  1. Welcome (again) to 6.9 release development! This release hopefully marks the last beta before moving to the -rc phase. The reason we still mark it beta is that we'd like to get wider testing of the new multiple-pool feature, as well as perhaps sneak in a couple more refinements. With that in mind, the obligatory disclaimer: Important: Beta code is not fully tested and not feature-complete. We recommend running on test servers only! That said, here's what's new in this release... Multiple Pools This feature permits you to define up to 35 named po
    30 points
  2. As always, prior to updating, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup". Besides bug fixing, most of the work in this release is related to upgrading to the Linux 5.9 kernel where, due to kernel API changes, it has become necessary to move device spin-up/down and spin-up group handling out of the md/unraid driver and have it handled entirely in user space. This also let us fix an issue where spin-up of devices in user-defined pools was executed serially instead of in parallel. We should also now be able
    22 points
  3. New in this release: GPU Driver Integration Unraid OS now includes selected in-tree GPU drivers: ast (Aspeed), i915 (Intel), amdgpu and radeon (AMD). These drivers are blacklisted by default via 'conf' files in /etc/modprobe.d: /etc/modprobe.d/ast.conf /etc/modprobe.d/amdgpu.conf /etc/modprobe.d/i915.conf /etc/modprobe.d/radeon.conf Each of these files has a single line which blacklists the driver, preventing it from being loaded by the Linux kernel. However, it is possible to override the settings in these files by creating the directory 'config/modprobe.d' on
    15 points
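     A minimal sketch of the override described in the entry above, assuming the USB flash device is mounted at /boot (the usual Unraid location) and that an empty file of the same name is enough to cancel the blacklist for a given driver, here i915:
        mkdir -p /boot/config/modprobe.d
        touch /boot/config/modprobe.d/i915.conf   # empty file: no blacklist line, so the driver loads on the next boot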
  4. As always, prior to updating, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup". Hopefully spin-up/down is sorted: external code (docker containers) using 'smartctl -n standby' should work OK with SATA drives. This will remain problematic for SAS until/unless smartmontools v7.2 is released with support for '-n standby' on SAS. SMART is unconditionally enabled on devices upon boot. This solves a problem where some newly installed devices may not have SMART enabled. Unassigned devices will get spun down according t
    12 points
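     For reference, a hedged illustration of the 'smartctl -n standby' pattern mentioned above; /dev/sdb is only a placeholder:
        smartctl -n standby -A /dev/sdb
        # if the SATA drive is already spun down, smartctl reports it is in standby and exits
        # without reading attributes, so the check itself does not wake the disk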
  5. Back in the saddle ... Sorry for the long delay in publishing this release. Aside from including some delicate coding, this release was delayed due to several team members, chiefly myself, having to deal with various non-work-related challenges which greatly slowed the pace of development. That said, there is quite a bit in this release, LimeTech is growing and we have many exciting features in the pipe - more on that in the weeks to come. Thanks to everyone for their help and patience during this time. Cheers, -Tom IMPORTANT: This is Beta software. We recommend runni
    12 points
  6. Changes vs. 6.9.0-beta29 include: Added workaround for mpt3sas not recognizing devices with certain LSI chipsets. We created this file: /etc/modprobe.d/mpt3sas-workaround.conf which contains this line: options mpt3sas max_queue_depth=10000 When the mpt3sas module is loaded at boot, that option will be specified. If you added "mpt3sas.max_queue_depth=10000" to the syslinux kernel append line, you can remove it. Likewise, if you manually load the module via the 'go' file, you can also remove it. When/if the mpt3sas maintainer fixes the core issue in the driver, we'll get rid of thi
    10 points
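     A quick way to confirm the workaround above took effect after a reboot, assuming the option is exported under /sys/module as loadable-module parameters usually are:
        cat /etc/modprobe.d/mpt3sas-workaround.conf          # options mpt3sas max_queue_depth=10000
        cat /sys/module/mpt3sas/parameters/max_queue_depth   # should print 10000 once the module is loaded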
  7. 6.9.0-beta24 vs. -beta22 Summary: fixed several bugs added some out-of-tree drivers added ability to use xfs-formatted loopbacks or not use loopback at all for docker image layers. Refer to Docker section below for more details (-beta23 was an internal release) Important: Beta code is not fully tested and not feature-complete. We recommend running on test servers only! Multiple Pools This feature permits you to define up to 35 named pools, of up to 30 storage devices/pool. The current "cache pool" is now simply a pool named "cache".
    9 points
  8. 6.9.0-beta25 vs. -beta24 Summary: fixed emhttpd crash resulting from having NFS exported disk shares fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below) fixed spin-up/down issues ssh improvements (see SSH Improvements below) kernel updated from 5.7.7 to 5.7.8 added UI changes to support new docker image file handling - thank you @bonienl. Refer also to additional information re: docker image folder, provided by @Squid under Docker below. known issue: "Device/SMART Settings/SMART controller t
    5 points
  9. The UI is broken on the Dashboard for Docker & VMs in the Chrome browser. Nothing happens when you click on the icons, and it only shows 1 VM of the 3 I have running. Everything is fine if you go into the Docker and VM tabs; there it all works as it should.
    4 points
  10. EDIT (March 9th 2021): Solved in 6.9 and up. Reformatting the cache to the new partition alignment and hosting docker directly on a cache-only directory brought writes down to a bare minimum. Hey Guys, First of all, I know that you're all very busy getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! Furthermore, I won't be expecting this to be at the top of the priority list, but I'm hoping someone on the development team is willing to invest (perhaps after the release).
    3 points
  11. @Valerio found this out first, but never received an answer. Today I found it out, too. But it has been present since 2019 (or even longer). I would say it's a bug, as: it prevents HDD/SSD spindown/sleep (depending on the location of docker.img); it wears out the SSD in the long run (if docker.img is located there) - see this bug, too; it prevents reaching the CPU's deep sleep states. What happens: /var/lib/docker/containers/*/hostconfig.json is updated every 5 seconds with the same content /var/lib/docker/containers/*/config.v2.json is updated e
    3 points
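     A rough way to watch for the rewrites described above using only stock shell tools; it prints the modification time of each hostconfig.json once per second, so a file being rewritten every ~5 seconds stands out:
        while true; do
          stat -c '%y  %n' /var/lib/docker/containers/*/hostconfig.json
          echo ---
          sleep 1
        done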
  12. After updating to 6.9 Final, the HDDs (SATA) no longer go into standby (after 30 min); there is no spin-down. I also set the delay to 15 minutes but the HDDs just don't go into standby. I had not changed any system settings before, only when I tried to solve the problem (uninstalled plugins etc.). Before the update, on 6.8.3, spin-down worked fine. I hope that you can help.
    2 points
  13. create a test directory in /mnt/user/Downloads
     root@MediaStore:/mnt/user/Downloads# ls -al test
     total 0
     drwx------ 1 root root 0 Jan 20 23:33 ./
     drwxrws--- 1 nobody users 205274 Jan 20 23:33 ../
     root@MediaStore:/mnt/user/Downloads# ls -ld /mnt/{cache,user}/Downloads
     drwxrws--- 1 nobody users 205274 Jan 20 23:33 /mnt/cache/Downloads/
     drwxrws--- 1 nobody users 205274 Jan 20 23:33 /mnt/user/Downloads/
     when this directory is mounted in a container like so
     root@MediaStore:~# docker run --rm --name box -d -v /mnt/cache/Downloads:/media alpine sleep 3600
     131ed3b6357ba8253513
    2 points
  14. If an extended SMART self test takes longer than the configured spin-down delay for the array disk, then the disk spins down and the self test is aborted. This behaviour is different from previous versions, where spin-down was temporarily suspended until after the self-test had completed. The message "SMART self test in progress" appears but the spin-down prevention doesn't operate. pusok-diagnostics-20210120-0317.zip (I know the unassigned Hitachi disk is totally shot!)
    2 points
  15. Maybe an oversight by Apple or maybe intentional. NAS is my Unraid server. The closest supported server type I found was macpro-2019-rackmount, so I customize the smb.service with the following script:
     cp -u /etc/avahi/services/smb.service /etc/avahi/services/smb.service.disabled
     cp /boot/extras/avahi/smb.service /etc/avahi/services/
     chmod 644 /etc/avahi/services/smb.service
     touch /etc/avahi/services/smb.service.disabled
     Where /boot/extras/avahi/smb.service looks like:
     <?xml version='1.0' standalone='no'?><!--*-nxml-*-->
     <!DOCTYPE service-group S
    2 points
  16. When I try to use Veeam as a backup target using NFS, I always have to reboot my Veeam Backup server before starting the job, or I will get errors about stale file handles in the Veeam logs and the job fails. The tunables options provided on the forum didn't work. The best possible solution would be to implement a newer version of NFS (4.x) so we can finally get rid of these long-standing errors for multiple users. I'm running beta25 at the moment (the latest at the time of posting).
    2 points
  17. This is more a request than a bug report: newer kernels support btrfs raid1c3 (3 copies) and raid1c4 (4 copies). Currently a pool converted to raid5 or raid6 using the GUI options will use raid1 for metadata (and rightly so, since it's not recommended to use raid5/6 for metadata). The problem for a raid6 pool is that redundancy won't be the same for data and metadata, as warned in the log: kernel: BTRFS warning (device sdb1): balance: metadata profile raid1 has lower redundancy than data profile raid6 i.e., the pool's data chunks can support two missing devices but metadat
    2 points
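     Until the GUI exposes the new profiles, something along these lines should bring a raid6 pool's metadata up to matching redundancy; the pool path is a placeholder, and note that a pool converted to raid1c3 can no longer be mounted by kernels older than 5.5:
        btrfs balance start -mconvert=raid1c3 /mnt/poolname   # convert metadata chunks to raid1c3
        btrfs filesystem df /mnt/poolname                     # verify the resulting profiles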
  18. tl;dr: It appears to me that Unraid 6.9.2 doesn't honor device-specific temperature notification settings for Unassigned Devices for a straightforward reason that is easily fixed. Now that I have two unassigned NVME drives in my Unraid server, the annoyance of over-temp notifications that ignore the per-device settings has doubled, so I've come up with what is hopefully a true fix, rather than a workaround, in the form of a small change to the check_temp function in /usr/local/emhttp/plugins/dynamix/scripts/monitor. Here's the diff for /usr/local/emhttp/plugins/dynamix/
    1 point
  19. Just noticed that on the Dashboard one cpu at a time goes to 100%. Running top shows wsdd taking 100% (I am assuming that means 100% on one processor). It jumps around to both regular and HT processors, all of which are not pinned to a VM. It does not seem to be impacting any function or performance; I have not seen this before upgrading to 6.8.0. I do not see any issues in the system log.
    1 point
  20. If I set these values and hit Apply the values are set back to the default values: Warning disk temperature threshold Critical disk temperature threshold The disks in question are M.2 NVMe devices running in single disk XFS pools. I tried to set these values to 60/65 celsius. After hitting Apply these values go back to 45/55 celsius.
    1 point
  21. I don't search often, so I can't say for certain it's the beta, but it used to work consistently and now it doesn't, even in safe mode. No results are returned no matter how long I wait. I'm running the latest macOS Catalina. I also tested in macOS Mojave (Unraid VM), same result. I have a Raspberry Pi shared over SMB where search from the same two clients works fine. Diagnostics from safe mode attached. nas-diagnostics-20201027-1319.zip
    1 point
  22. As a long time Unraid user (over a decade now, and loving it!), I rarely have issues (glossing right over those Ryzen teething issues). It is with that perspective that I want to report that there are major issues with 6.9.2. I'd been hanging on to 6.8.3, avoiding the 6.9.x series as the bug reports seemed scary. I read up on 6.9.2 and finally decided that with two dot.dot patches it was time to try it. My main concern was that my two 8 TB Seagate Ironwolf drives might experience this issue: I had a series of unfortunate events that makes it extremely diffic
    1 point
  23. This isn't specifically an Unraid problem, but I'm putting it here for visibility and awareness, as Unraid v6.9.2 is affected by this bug. I already commented about the problem over here: UNRAID 6.9.2 - DOCKER CONTAINER NOT REACHABLE OVER THE INTERNET WITH IPV6 There is a problem in the networking engine of Docker when using IPv6 with a container that has only an IPv4 address assigned in a bridged network. Prior to Docker version 20.10.2, IPv6 traffic was forwarded to the container regardless. This behavior changed with version 20.10.2. This is the pull request that changed this behavio
    1 point
  24. Hi, I went to turn on the TV for my kid this morning and there was no Plex; I went to check my shares and they aren't there. I pulled Diagnostics and did a reboot through the button in the upper right-hand corner (Dark Theme), and when the system came back up it whined about an unclean shutdown. Pulled a second set of diags. I had not touched the array since yesterday morning. This is not the first time a share has dropped out from under me on a version of 6.9, but other people had reported it, so I left it alone at the time. This occurred while I was out of town for the evening and unable
    1 point
  25. If an extended SMART self-test takes longer than the configured spin-down delay for the array disk, then the disk spins down and the self test is aborted. This behaviour is different from Unraid 6.8, where spin-down was temporarily suspended until after the self-test had completed. The message "SMART self test in progress" appears but the spin-down prevention doesn't operate. Previously noticed affecting 6.9.0-rc2 and reported here:
    1 point
  26. Ever since I upgraded from 6.8.3 to 6.9.1, if I shut down the server with /usr/local/sbin/powerdown or reboot it with /usr/local/sbin/powerdown -r, the system always performs a parity check when it comes back up, as if it didn't cleanly shut down. Also, my parity check average is 66 MB/s instead of 116 MB/s like it was on 6.8.3.
    1 point
  27. I noticed that after upgrading from 6.8.3 to 6.9.1, one of my docker containers (which runs on the host) lost its connectivity to a docker container which runs on a user-defined network with its own IP. I could not understand why, as all the settings, including the "Host Access to custom networks" option, were checked. After a little playing, I stopped the array, turned the option off and saved, then turned the option on and saved. Started the array and communication between the docker containers was restored. I am not sure if I can replicate this now that it has been
    1 point
  28. I have rebooted twice since upgrading to 6.9.0. The first time was to upgrade to 6.9.1, and again to correct my VM's video card going wonky. Both times a parity check was started upon boot. What is going on? unraid-diagnostics-20210312-1736.zip
    1 point
  29. Hello, I would like to make a docker container of mine also available via IPv6 to the outside world. For that I need my Unraid system to have a static IPv6 suffix, since my prefix is assigned by my provider. In my router I then need to set the IPv6 suffix for which the ports should be opened. Normally that suffix should be fixed on a server like this, but it isn't, since every time I reboot or reconfigure Unraid's network settings a new IPv6 address is assigned to my Unraid system. Normally this should only be the case if the IPv6 privacy extension is enabled oth
    1 point
  30. Hello, this is the first time installing Unraid. I am attempting to get UEFI-only boot mode to work. The local GUI does not show up; it shows a blinking cursor. This applies to both 6.9 & 6.9.1; I have tried with both and it does the same. I have tried Safe Mode GUI with no plugins and it still doesn't work. I have not installed any plugins as I just installed it onto a flash drive. My BIOS is Asus version 3405 and American MegaTrends. I only have AMD graphics cards. I know I am not the only one with this issue as a previous thread is covering some of
    1 point
  31. For reference: 6.9 rc2, admittedly on a different server
    1 point
  32. Likely related to this bug, but this one is more serious: any new multi-device pool created on v6.7+ will be created with the raid1 profile for data but the single (or DUP if HDDs are used) profile for metadata, so if one of the devices fails the pool will be toast.
    1 point
  33. Hi, I noticed that SMART attribute 22 is reported as "Unknown attribute" on recent WDC_WD120EDAZ hard drives (Western Digital):
     smartctl -a /dev/sdl
     smartctl 7.1 2019-12-30 r5022 [x86_64-linux-4.19.107-Unraid] (local build)
     Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
     === START OF INFORMATION SECTION ===
     Device Model: WDC WD120EDAZ-11F3RA0
     Serial Number: REDACTED
     LU WWN Device Id: 5 000cca REDACTED
     Firmware Version: 81.00A81
     User Capacity: 12,000,138,625,024 bytes [12.0 TB]
     Sector Sizes: 512 bytes logical, 4096 bytes physical
    1 point
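     Attribute 22 on these helium-filled drives is presumably the helium level (my assumption, not confirmed in the report); if so, a possible workaround is to give smartctl a display name for it:
        smartctl -v 22,raw48,Helium_Level -a /dev/sdl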
  34. Unraid version 6.8.3; no cache drives/pool; no SSDs, all HDDs, 10 drives + parity drive. I have been an Unraid user since about 2010. Recently I noticed I could not spin down one of my disks and the parity disk. In the past I was sure I was able to spin all disks down and they stayed down until accessed. To be precise, they do spin down (using the capabilities of the main GUI page, either per disk or using the "Spin Down" button), however they spin back up after a few seconds. The data disk and the parity disk are getting writes every few seconds. I had recently turned on docker and installed a
    1 point
  35. I get the following when trying to join an AD on beta 25:
     Jul 14 13:32:50 Storage smbd[3950]: [2020/07/14 13:32:50.110989, 0] ../../source3/auth/auth_util.c:1397(make_new_session_info_guest)
     Jul 14 13:32:50 Storage smbd[3950]: create_local_token failed: NT_STATUS_INVALID_PARAMETER_MIX
     Jul 14 13:32:50 Storage smbd[3950]: [2020/07/14 13:32:50.111027, 0] ../../source3/smbd/server.c:2042(main)
     Jul 14 13:32:50 Storage smbd[3950]: ERROR: failed to setup guest info.
     Jul 14 13:32:50 Storage nmbd[3955]: [2020/07/14 13:32:50.122663, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
     Jul 14 1
    1 point
  36. Upgrading from 6.8.3 to 6.9.0 RC1 breaks Active Directory integration. Attempting to Join AD from the GUI after the upgrade results in an unjoined state. Attempting to Join AD from the CLI results in similar errors to this post. Domain Name: homeoffice.local NETBIOS Name: HOMEOFFICE Windows Server 2012 R2 AD Domain I'm attaching two support files for reference. The first support file was captured after upgrading to 6.9.0 RC1. I captured it shortly after attempting to Join AD. That filename is 'excelsior-diagnostics-20201211-0719.zip'.
    1 point
  37. 6.9.0-beta35 - SAMBA Issues Anyone else having issues with Samba since upgrading? Also, there are about 100000000000 errors in the event log; it keeps looping these errors in the syslog until, it seems, RAM is full and the server dies, or you end the Samba service over SSH, which stops it. Using Windows 2019 AD. Tried with two profiles, both of which have permission to join the domain.
    1 point
  38. Update: Slow parity check is a symptom of a 5.8 Kernel Bug where pre-Skylake CPUs get stuck at the minimum pstate MHz (often 800MHz). Scroll down in this thread for links to the associated Kernel bug and discussion. Otherwise, this link will jump you to my updated post in this thread. I've been noticing that 6.9.0 has been slower than 6.8.3 for disk access when using dockers (also the high amount of disk reads on my cache even after making them 1MB aligned, but that's another issue). I did my first parity check and it's markedly slower than historical. Normally
    1 point
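     An easy check for the stuck-pstate symptom described above, assuming the cpufreq sysfs interface is present on the affected machine; if every core sits near the minimum frequency during a parity check, the kernel bug is the likely cause:
        grep MHz /proc/cpuinfo                                      # current per-core clocks as the kernel sees them
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq   # same information in kHz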
  39. So everyone, not sure if this is a bug, but I thought I'd post it. I increased a vdisk by mistake and, like normal, I was going to use the command line to resize the vdisk. I get this error now:
     /mnt/user/domains/Windows Server 2019# qemu-img resize vdisk2.img 30G
     WARNING: Image format was not specified for 'vdisk2.img' and probing guessed raw.
     Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
     Specify the 'raw' format explicitly to remove the restrictions.
     qemu-img: Use the --shrink option to
    1 point
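     For what it's worth, the two things those messages ask for are an explicit format and, because the image is being made smaller, the --shrink flag; shrink the guest's partitions/filesystem first, since anything beyond the new size is discarded:
        cd "/mnt/user/domains/Windows Server 2019"
        qemu-img resize -f raw --shrink vdisk2.img 30G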
  40. https://gitlab.com/libvirt/libvirt/-/issues/31 Issue described in link above. It has been patched; the fix should be in the 6.5.0 release. (For VM Backup plugin users: I don't use the plugin, but this is likely your issue.)
    1 point
  41. Bug: Unraid is doing a full parity sync when asked to do a read-only parity check. Confirmed and reproduced on Unraid 6.8.3. Reported here: Somebody said that it is still present in this beta. (I have not tested that.)
    1 point
  42. Mechanical SATA hard drives assigned to pools don't automatically spin down. I can spin them down manually using the down arrow icon for that pool and they then stay spun down, as expected, but the Main and Dashboard pages continue to show them as active. More information/screen-grabs here: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta22-available-r955/page/8/?tab=comments#comment-9704
    1 point
  43. Just noticed a drive in my system that's apparently going into nuclear meltdown. noah-diagnostics-20200717-2159.zip
    1 point
  44. Hi there, I upgraded from Unraid v6.8.3 to v6.9.0-beta22 for testing. After that upgrade all my shares disappeared in the GUI under Shares. In the server log I get errors like: emhttpd: error: get_filesystem_status, 6475: Operation not supported (95): getxattr: /mnt/user/... Those errors also appear for new shares, never created before. I have no custom mounts under "/usr/mnt" and I have not modified any files. I can get it working again if I create a share with the same name as before and then reboot. After that all shares are back again.
    1 point
  45. I have contacted Highpoint about creating a driver for the Rocket 750 that works with the latest version of Linux. So far, no luck. I'll keep bugging them, but hopefully a solution can be found; otherwise I'm stuck on 6.8
    1 point
  46. I installed RC1 and it booted fine. Dockers & VMs running. I hit the Stop button to bring down the array so I could change an SMB setting to disable NetBIOS. The status bar kept reporting that it was retrying to unmount. Unable to pull up the syslog via the GUI, so I telnetted in and tailed the syslog. Received the following over & over:
     Oct 15 01:17:30 NAS emhttpd: Retry unmounting disk share(s)...
     Oct 15 01:17:35 NAS emhttpd: Unmounting disks...
     Oct 15 01:17:35 NAS emhttpd: shcmd (250): umount /mnt/cache
     Oct 15 01:17:35 NAS root: umount: /mnt/cache: target is busy.
     O
    1 point
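     When the array hangs on "target is busy" like this, something still has files open under /mnt/cache; one way to find it before retrying the stop, assuming fuser and lsof are available on the system:
        fuser -vm /mnt/cache          # processes using the mount
        lsof /mnt/cache 2>/dev/null   # naming a mount point lists files open on that filesystem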
  47. Hi, while I was trying to test the new WSL2 feature, it seems the nested VM feature is not working anymore. Unraid 6.8.3, VM: Windows 10 Pro. What I have done: disabled all VMs, disabled the VM Manager, then from the Unraid shell:
     modprobe -r kvm_intel
     modprobe kvm_intel nested=1
     Edited the VM XML template with the following entry under the <cpu> section:
     <feature policy='require' name='vmx'/>
     Tried starting the VM Manager; no way, it always ended in <Libvirt service failed to start>. After some digging, results only showed check pathes
    1 point
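     A quick sanity check after reloading kvm_intel with nested=1 as above: the module exports the setting under /sys, so it is easy to confirm the flag actually stuck before blaming libvirt:
        cat /sys/module/kvm_intel/parameters/nested   # should print Y (or 1 on older kernels)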
  48. No issues in 6.7.2. Existing VMs (Server 2016) with Hyper-V enabled won't boot after the update -> stuck at the TianoCore logo. Booting into recovery mode works; booting from an install DVD to load virtio drivers and modify the BCD works. Removing "hypervisorlaunchtype auto" from the BCD makes the VM boot (but disables Hyper-V). How to reproduce (in my case, I hope it's not just me...): 1) new VM with Server 2016 template 2) install Server 2016/2019 3) enable Hyper-V and reboot. It should either not reboot, boot into recovery, or come back with a non-working "VMbus"
    1 point
  49. Start with a valid single-parity array, add parity2 and a new disk at the same time, and you get: "Invalid expansion - You may not add new disk(s) and also remove existing disk(s)." I know this isn't possible, but the error is wrong; I'm not removing any disks.
    1 point
  50. This is not a big deal and has been reported before on the general support forum: when a custom controller type is used for SMART, you can see all the SMART attributes including temperature, but for some reason the temperature is not displayed on the GUI or Dashboard. Another user was seeing this with the HP cciss controller, and the same happens to me with an LSI MegaRAID controller, so it looks like a general issue when using a custom SMART controller type. Note: I'm using two disks in RAID0 for each device here, so I can only choose SMART from one of the member disks.
    1 point
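     For comparison from the command line, SMART data (including temperature) for a disk behind an LSI MegaRAID controller can usually be read like this; the ,0 disk index and /dev/sdb are placeholders:
        smartctl -a -d megaraid,0 /dev/sdb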