cholzer

Everything posted by cholzer

  1. So if I get this one, it comes in IT mode rather than IR mode? https://www.broadcom.com/products/storage/host-bus-adapters/sas-9300-8i
  2. Ah, so if I hadn't bought an LSI SAS 3008 and had gone straight for a 9300-8i instead, I wouldn't have had to flash the IT firmware? Cool, great to know! So you would still go for a 9300-8i in 2021, right?
  3. I am building an Unraid system for a friend and was wondering which HBA to choose. It has to support up to 7 HDDs and 1 SSD (cache). If possible I'd like to avoid the stressful experience I had with switching/flashing my SAS 9300-8i to IT mode 😅
  4. After the update to RC2 everything seemed fine, but then I started to notice that Unraid would wake up disks even when data was copied to the *cache*, not the array. I also noticed that throughout the day some disks get spun up for no apparent reason, at times when everyone is asleep. There are no VMs or Docker containers; Unraid is used as a plain NAS.
  5. The same 2 disks (sdb is the parity disk) seem to get spun up to read SMART (?) throughout the day. The times are AM; everyone was asleep during that time, no one accessed the NAS. There are no VMs and no Docker containers in Unraid, I use it as a "simple" NAS. Controller: Fusion-MPT 12GSAS SAS3008 PCI-Express in IT mode. The following plugins are installed: CA User Scripts, Community Applications, Dynamix Cache Dirs, Dynamix Schedules, Dynamix SSD Trim, openVMTools_compiled, Recycle Bin, Tips and Tweaks, Unassigned Devices, Unassigned Devices Plus (Addon).
     Jan 12 03:02:13 NAS emhttpd: read SMART /dev/sde
     Jan 12 03:02:32 NAS emhttpd: read SMART /dev/sdd
     Jan 12 04:01:19 NAS emhttpd: spinning down /dev/sdd
     Jan 12 04:01:21 NAS emhttpd: spinning down /dev/sde
     Jan 12 04:07:29 NAS emhttpd: read SMART /dev/sde
     Jan 12 04:27:05 NAS emhttpd: read SMART /dev/sdd
     Jan 12 05:31:27 NAS emhttpd: spinning down /dev/sdd
     Jan 12 05:31:27 NAS emhttpd: spinning down /dev/sde
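[Editor's note] For spotting patterns like the one described above, it helps to tally the emhttpd disk events per device. A minimal sketch, assuming the syslog format shown in the excerpt (`count_disk_events` is a helper name invented here, not an Unraid tool; in practice you would feed it lines from /var/log/syslog):

```python
import re
from collections import Counter

# Match emhttpd disk events like "emhttpd: read SMART /dev/sde".
EVENT = re.compile(r"emhttpd: (read SMART|spinning down|spun up) (/dev/\w+)")

def count_disk_events(lines):
    """Tally (device, event) pairs from emhttpd syslog lines."""
    counts = Counter()
    for line in lines:
        m = EVENT.search(line)
        if m:
            counts[(m.group(2), m.group(1))] += 1
    return counts

# Sample lines taken from the log excerpt above.
sample = [
    "Jan 12 03:02:13 NAS emhttpd: read SMART /dev/sde",
    "Jan 12 03:02:32 NAS emhttpd: read SMART /dev/sdd",
    "Jan 12 04:01:19 NAS emhttpd: spinning down /dev/sdd",
    "Jan 12 04:01:21 NAS emhttpd: spinning down /dev/sde",
]
for (dev, event), n in sorted(count_disk_events(sample).items()):
    print(f"{dev}: {event} x{n}")
```

Grouping by (device, event) makes it obvious when only a subset of disks is affected, as with sdd/sde here.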
  6. I just have the spindown delay set to 30 minutes. That works fine for me using an LSI SAS 2008 in IT mode.
  7. Generally speaking, disks spin up/down fine for me in RC2, but there is one use case where I have noticed unnecessary spin-ups. Steps to reproduce:
     1. Create a share which is set to "cache only".
     2. Wait for Unraid to spin down all disks.
     3. Access that "cache only" share (in my case from a Windows 10 PC where it is mapped as a network drive).
     4. Copy a file to that "cache only" share (while all other array disks are spun down!).
     Expected behaviour: the file gets copied to the cache drive, all array disks stay spun down.
     Result: (some) disks spin up, and the log shows "read SMART" entries for those array disks.
     Jan 6 05:53:07 NAS emhttpd: read SMART /dev/sde
     Jan 6 05:53:26 NAS emhttpd: read SMART /dev/sdd
  8. Thank you! I was only looking at the log next to the disk in UD, which did not show anything. Looking at the correct log, I guess there is indeed something wrong with that partition. I did shut down the PC as usual before I removed it, though. Well... let's investigate.
     Dec 30 22:17:36 NAS unassigned.devices: Mount warning: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Falling back to read-only mount because the NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting.)
  9. RC2 with 2020.12.19. For a long time I have used UD to share a 6TB drive, which was previously used in a Windows PC and only contains a single NTFS partition. That drive/SMB share still works nicely! 👍 However, today I added a 512GB SSD (using the same settings in UD as the 6TB drive). When I access that share from a Windows PC, it tells me that it is "write protected", even though "read only" is not enabled in UD. The settings are identical to the 6TB drive, and the same SMB user is used as for the 6TB drive. Another thing I just noticed is that the "Change Disk UUID" dropdown list in the UD settings is empty.
  10. Upgraded my Unraid server (which runs inside ESXi) to RC2 about a day ago. Everything is working nicely so far, including spin-down/up of disks. (Note: I do not use any Docker containers or VMs in Unraid; it is a pretty simple setup with an LSI 2008 in IT mode, a 5x 8TB HDD array, 1x 500GB cache SSD and 1x 6TB unassigned-devices share.)
  11. Thanks, I just read your reply in the 6.9 RC1 thread! But I guess I will wait for RC2, which fixes some spin-up/down issues remaining in RC1?
  12. So I just need to upgrade to 6.9 RC1 to get that functionality. Guess I will check the thread about known issues in the RC and whether it is worth waiting for the final. Thank you! Good thing I RTFM.
  13. Oh, so the current "Spin down disks?" setting in the Unassigned Devices configuration will only start to work with 6.9 RC1?
  14. Cache Dirs plugin: it seems the "include folders" feature does not support folder names containing a "&":
      Fotoalbum Anita & Chris
      Software & Resources
      Dec 15 07:37:52 NAS cache_dirs: ERROR: included directory 'Fotoalbum\ Anita\ &\ Chris' does not exist.
      Dec 15 07:37:52 NAS cache_dirs: ERROR: included directory 'Software\ &\ Resources' does not exist.
      Could the plugin support that in the near future?
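[Editor's note] From the errors it looks as if the include paths are backslash-escaped character by character, and the `&` then leaks through to the shell as a control operator. A hedged sketch of a more robust approach (not the plugin's actual code) is to quote each path as a whole, which Python's `shlex.quote` does:

```python
import shlex

# Folder names taken from the post above.
includes = ["Fotoalbum Anita & Chris", "Software & Resources"]

# shlex.quote() wraps the whole name in single quotes, so '&' and spaces
# reach the shell literally instead of being parsed as operators.
quoted = [shlex.quote(name) for name in includes]
print(quoted)  # ["'Fotoalbum Anita & Chris'", "'Software & Resources'"]
```

Names without shell-special characters are passed through unchanged, so quoting everything this way is safe.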
  15. Hi there! :) My "unassigned devices" HDD does not spin down automatically, while the array devices spin down nicely (see attached image). The Cache Dirs plugin uses "include" folders as recommended, all plugins are updated to the latest version, and all drives are connected to the same HBA. Also: you can spin the array devices down/up by clicking on the dot to the left of the disk number; could that functionality be added to the Unassigned Devices plugin, where clicking the icon currently does nothing? :)
  16. Just want to let you know that everything went well!
  17. @StevenD thanks a lot! There is no kind of configuration required, right?
  18. Hi guys! I have a rather scary task ahead of me. 😬 My current config:
      - Core i5 system on an Asus mainboard, 8GB RAM
      - LSI 9200-8i (IT mode)
        --- 1x 8TB IronWolf parity
        --- 4x 6TB IronWolf data
      - 1x 120GB SSD cache (no Docker containers, no VMs)
      Now I want to move only my IronWolf HDDs and the Unraid USB stick to a new system:
      - Xeon E5-2620, Supermicro X10SRi-F, 32GB RAM
      - LSI 9300-8i (IT mode)
      - 1x 500GB SSD cache
      Am I correct that this should basically be "plug and play"? Unraid should boot up and the array should be present (minus the cache SSD, which I will replace with a new one). Anything special that I have to pay attention to? 😅 Thanks in advance!
  19. This is the vbios I used (and removed the header from): https://www.techpowerup.com/vgabios/213099/asus-rtx2070super-8192-190623 The other Asus 2070 Super cards on TechPowerUp are Strix; I do not have a Strix, I have this one (the picture matches as well). I don't know why I get a black screen with OVMF but not with SeaBIOS, but that is what is happening on my rig. 😅 What is confusing me now is that when I use the keyboard (which is passed through to the VM), I can't control the VM; instead the Unraid terminal shows up again. I have ordered a USB PCIe card now to pass the entire card to the VM and connect mouse+keyboard to that card. Sadly, passing through one of the two mainboard USB controllers did not work.
  20. Thanks for your reply! I downloaded the BIOS from TechPowerUp and removed the header with HxD. 10 seconds ago I just got it to work! I must use SeaBIOS; with OVMF it does not work.
      OVMF + i440fx-4.2 -> black screen
      OVMF + Q35-4.2 -> black screen
      SeaBIOS + i440fx-4.2 -> works
      SeaBIOS + Q35-4.2 -> works
      Next issue is that as soon as I use the keyboard I passed through to the VM, the Unraid terminal comes back. 😅
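[Editor's note] The manual hex-editor step described above can also be scripted. This is a hedged sketch of my own (`strip_vbios_header` is not an official tool): it scans for the standard PCI option-ROM signature 0x55AA and validates each candidate via the 'PCIR' data-structure pointer that the PCI Firmware spec places at offset 0x18 of the ROM image. Always double-check the output in a hex editor before handing it to a VM.

```python
def strip_vbios_header(data: bytes) -> bytes:
    """Return the ROM image starting at the first valid PCI option-ROM
    signature (0x55AA), dropping any vendor header that a GPU-Z style
    dump may have prepended."""
    i = 0
    while True:
        i = data.find(b"\x55\xaa", i)
        if i < 0:
            raise ValueError("no PCI option-ROM signature found")
        # Bytes 0x18-0x19 of a ROM image hold a little-endian pointer
        # to the 'PCIR' data structure; a stray 0x55AA in a vendor
        # header will fail this check.
        pcir = int.from_bytes(data[i + 0x18:i + 0x1A], "little")
        if data[i + pcir:i + pcir + 4] == b"PCIR":
            return data[i:]
        i += 1

# Usage (hypothetical file names):
# rom = open("Asus.RTX2070Super.8192.190623.rom", "rb").read()
# open("Asus.RTX2070Super.8192.190623_noHeader.rom", "wb").write(
#     strip_vbios_header(rom))
```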
  21. Making my first baby steps with Win10 VMs in Unraid. I'm trying to pass an ASUS RTX 2070 Super through to the VM, but while the VM does boot, I only get a black screen. The IOMMU group of the RTX 2070:
      [10de:1e84] 08:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1)
      [10de:10f8] 08:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
      [10de:1ad8] 08:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
      [10de:1ad9] 08:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
      I have added the last two to the syslinux config: vfio-pci.ids=10de:1ad8,10de:1ad9 as mentioned here: https://wiki.unraid.net/Unraid_6/Frequently_Asked_Questions#I.27m_having_problems_passing_through_my_RTX-class_GPU_to_a_virtual_machine
      This is my VM XML:
      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm'>
        <name>Windows 10</name>
        <uuid>c1f234d5-f238-9111-c751-6ae64addbfaa</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>9437184</memory>
        <currentMemory unit='KiB'>2097152</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>1</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='2'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/c1f234d5-f238-9111-c751-6ae64addbfaa_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='none'/>
          </hyperv>
        </features>
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='1' threads='1'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/Windows.iso'/>
            <target dev='hda' bus='sata'/>
            <readonly/>
            <boot order='2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso'/>
            <target dev='hdb' bus='sata'/>
            <readonly/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x8'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x9'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0xa'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0xb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0xc'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
          </controller>
          <controller type='pci' index='6' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='6' port='0xd'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
          </controller>
          <controller type='pci' index='7' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='7' port='0xe'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
          </controller>
          <controller type='pci' index='8' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='8' port='0xf'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:72:6b:0d'/>
            <source bridge='br0'/>
            <model type='virtio'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
            </source>
            <rom file='/mnt/user/domains/vbios/Asus.RTX2070Super.8192.190623_noHeader.rom'/>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x08' slot='0x00' function='0x2'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x08' slot='0x00' function='0x3'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
      </domain>
      I also tried to "group" all 4 devices of the RTX 2070, I tried with and without the vbios, and I tried with "append iommu=pt pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1" (and rebooted every time, of course!), but I always get a black screen. Anyone have an idea what I'm doing wrong? With VNC as the GPU the VM works fine. The system is a Ryzen 3800X on an Asus Crosshair VIII.
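[Editor's note] A quick way to double-check the grouping described above is to walk the standard Linux sysfs layout. This sketch is a helper of my own (`list_iommu_groups` is an invented name; the `root` parameter exists only so the function can be pointed at a test tree):

```python
import os

def list_iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> sorted list of PCI addresses in it."""
    groups = {}
    if not os.path.isdir(root):
        return groups
    for group in os.listdir(root):
        devs = os.path.join(root, group, "devices")
        if os.path.isdir(devs):
            groups[group] = sorted(os.listdir(devs))
    return groups

# On the system above, the group holding 0000:08:00.0 should contain all
# four RTX 2070 functions (.0-.3); any extra device in that group must
# also be bound to vfio-pci, or the ACS override is needed.
for g, devs in sorted(list_iommu_groups().items()):
    print(f"group {g}: {' '.join(devs)}")
```

If the four functions land in separate groups (as with the ACS override active), each can be passed through independently.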
  22. That would be awesome! Do you also need a Docker container in Unraid that the RPi sends its commands to?
  23. Is that "www.home-assistant.io"? As far as I understand, you use the mobile app to launch the VM, not a physical push button; is that right?
  24. Origin, Uplay and the Epic launcher did not like that the last time I tried, which is why I'd like to go a different route. I suppose I could use the Unassigned Devices plugin, pass a disk through to the VM and share it from there to my LAN. I don't need any parity for the game library.