hawihoney

Members
  • Content Count

    905
  • Joined

  • Last visited

Community Reputation

17 Good

About hawihoney

  • Rank
    Member

Converted

  • Gender
    Undisclosed


  1. Please have a look at the two screenshots. They show the editor windows of two existing VMs. The Broadcom devices are marked gray (unavailable) if they are in use. The USB devices stay available even if in use. IMHO, the USB devices must be marked gray (unavailable) as well if they are in use.
  2. Sorry to ask: what's the opposite (spin-up) command? GUI and system are now mismatched. Some disks "think" they are spun up but aren't, and vice versa. Plex puts the contents of some disks in the trash bin because the system "thinks" the disks are spun up while Plex can't read them. (See the spin-up sketch after this list.)
  3. Why not add the unused disks to the array?
  4. Since 6.9.0 no SATA disks in the array spin down. Every hour I see the spin-down commands in the syslog, but the disks don't spin down. I stopped all Docker containers, reduced plugins to the minimum (Community Applications, Unassigned Devices, User Scripts, Nerd Tools for screen and python), and run no fancy fan scripts. The disks simply ignore the spin-down commands. The HBA is an LSI 9300-8e; the disks are mixed, 6 TB to 2 TB. It worked for years here, up to and including 6.8.3. With 6.9.0 it stopped. Is there a way to brute-force spin down SATA devices (see the sketch after this list)? If there's none I'm
  5. I'm adding a new bug report because this is a little bit different. Here Unraid is running within VMs. Funny thing: two identical VMs, one spins down, the other does not. Both VMs are nearly bare; just two Docker containers (MakeMKV, MKVToolNix) and four plugins (Community Applications, Unassigned Devices, User Scripts, Nerd Tools) are running. All disks of the two VMs are attached to two LSI 9300-8e HBAs. The backplanes are identical. VM #1 (TowerVM01) spins down as usual. VM #2 (TowerVM02) does not spin down at all; hitting the Spin Down button does
  6. I have two system/appdata folders, one on an unassigned device and one on an array device. The system share is set to "Use cache: No". E.g.: /mnt/disk17/system/ and /mnt/disks/NVMe1/system/. VMs run on the array; Dockers are split between both. The Docker/VM settings point to /mnt/disk17/system/docker.img and /mnt/disk17/system/libvirt.img. Should I expect problems?
  7. I created two nearly identical Slackware VMs. The only difference between them is a different LSI HBA passed through and a different IMG file. The GUI view of both VMs looks identical, but the XML views differ a lot: one of the XMLs is much bigger and contains 129 controller entries like the one shown below. What are these? Can I remove them (see the sketch after this list)? Starting that VM takes noticeably more time. <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10
  8. Thanks. It seems that single-thread performance matters more for Unraid parity-check/-rebuild than the number of CPUs or cores.
  9. From my own experience it looks as if parity-check/-rebuild runs single-threaded? Is this true? (See the sketch after this list.)
  10. Seems that this has been fixed in PMS recently: https://forums.plex.tv/t/plex-media-server/30447/396
  11. This is a JBOD DAS with no rear connectors for a motherboard. It has two backplanes (24 and 21 drives). You can connect an external HBA like an LSI 9300-8e to the IN ports at the rear and attach further DAS enclosures via the OUT ports. The backplanes come with expanders for these connections. In the pictures you can see EL2-type backplanes; these have expander and failover features. These boxes are heavy, professional, and loud as hell. I love them. I've built several SC846 systems with corresponding DAS enclosures for Unraid. The biggest had 72 drives. With the current Unra
  12. First, why: in the last two years we set up several big Unraid servers with attached "Direct Attached Storage" enclosures. Usually the bare-metal server hosted several HBAs, and every attached DAS enclosure was one individual Unraid array with its own license stick and its own HBA passed through to its Unraid VM (see the passthrough sketch after this list). With this setup we could run several Unraid arrays (22 data drives, 2 parity drives each) as one server. Using the 36/45-drive enclosures from Supermicro, we would now like to change that: only one HBA per bare metal, usin
  13. For two days I have been trying to install a lightweight graphical Linux distribution in a VM. I started with Ubuntu and Debian. I downloaded their ISOs, leaving everything else at its default on the VM creation page. Opening VNC, I'm always stuck on a BIOS page. Sometimes I see a counter "Press a key ... startup.nsh". Whatever I do, the only thing I can do is enter the BIOS or leave the BIOS and see the startup.nsh counter again (see the sketch after this list). Call me stupid, but I don't get it. Installing a Windows VM, on the other hand, was easy as 1, 2, 3. What am I doing wrong? Any help is highly appreciated.
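
A minimal sketch for the spin-up question in item 2, assuming the standard Linux tools shipped with Unraid (hdparm, dd); /dev/sdX is a placeholder for the actual disk device:

    # Check the real power state (active/idle vs. standby = spun down)
    hdparm -C /dev/sdX

    # Spin a disk down manually
    hdparm -y /dev/sdX

    # Force a spin-up: an uncached read wakes the drive
    dd if=/dev/sdX of=/dev/null bs=512 count=1 iflag=direct

On Unraid itself, mdcmd also accepts spindown/spinup with an array slot number (e.g. mdcmd spinup 1), which keeps the GUI state in sync; whether that matches your release is worth verifying.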
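
For item 4, a hedged sketch: SATA disks behind a SAS HBA sometimes ignore the ATA standby request, and issuing a SCSI START STOP UNIT via sdparm can work instead; sdparm may need to be installed first, and /dev/sdX is again a placeholder:

    # The usual ATA standby request
    hdparm -y /dev/sdX

    # If the drive ignores that behind the HBA, try the SCSI stop command
    sdparm --command=stop /dev/sdX

    # Verify the result
    hdparm -C /dev/sdX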
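
For item 7: on a Q35 machine type, libvirt adds pcie-root-port controllers so PCIe devices can be attached or hot-plugged, and it normally recreates the ones it needs if you delete the extras. A sketch for counting and editing them, with TowerVM02 standing in for the actual VM name:

    # Count the pcie-root-port controllers in the VM definition
    virsh dumpxml TowerVM02 | grep -c pcie-root-port

    # Edit the XML; removing unused root ports here should be safe,
    # since libvirt re-adds what the attached devices actually require
    virsh edit TowerVM02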
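
For item 9, one way to see whether the parity check is bound to a single thread is to watch per-thread CPU usage while it runs; the exact name of Unraid's md/parity kernel thread varies, so the grep pattern below is an assumption:

    # Per-thread view: one thread pinned near 100% suggests a single-threaded workload
    top -H -b -n 1 | head -n 20

    # Look for the md/unraid kernel threads (pattern is an assumption)
    ps -eLf | grep -iE 'unraid|md'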
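
For item 12, a sketch of how one HBA per VM is typically identified and reserved for passthrough; the vendor:device ID shown is a placeholder and must be read from your own lspci output:

    # Find the HBAs and their [vendor:device] IDs
    lspci -nn | grep -i LSI

    # Reserve them for vfio-pci at boot, e.g. by appending to the
    # kernel line in syslinux.cfg (ID is a placeholder):
    #   vfio-pci.ids=1000:0097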
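
For item 13: landing at the startup.nsh prompt usually means OVMF didn't find a default boot entry on the ISO. The commands below are standard EFI shell commands; FS0: and the bootloader path are the common defaults but may differ on your ISO:

    Shell> exit                      # return to the firmware menu and pick the DVD entry, or:
    Shell> FS0:                      # switch to the first detected filesystem
    FS0:\> \EFI\BOOT\BOOTX64.EFI     # start the distribution's UEFI bootloader

Alternatively, recreating the VM with SeaBIOS instead of OVMF sidesteps UEFI boot entirely.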