Leaderboard

Popular Content

Showing content with the highest reputation on 12/02/23 in all areas

  1. Oh come on! Just 50 mins after I updated to 6.12.5! *mumbles* I'll look into it later *mumbles*
    7 points
  2. The OpenZFS team has published a true fix (rather than just a mitigation) for the ZFS data corruption issue, so we are publishing Unraid 6.12.6 with that fix. This also includes an IPv6 bug fix. All users are encouraged to read the release notes and upgrade.
     Upgrade steps for this release:
     - As always, prior to upgrading, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup".
     - Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
     - If the system is currently running 6.12.0 - 6.12.4, we're going to suggest that you stop the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type: umount /var/lib/docker. The array should now stop successfully (issues related to Docker not stopping should be resolved in this release).
     - Go to Tools -> Update OS. If the update doesn't show, click "Check for Updates".
     - Wait for the update to download and install.
     - If you have any plugins that install 3rd party drivers (NVIDIA, Realtek, etc.), wait for the notification that the new version of the driver has been downloaded.
     - Reboot.
     This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip. Release Notes
    4 points
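     A minimal shell sketch of the only terminal step in the upgrade instructions above, for the case where stopping the array hangs on "Retry unmounting shares"; the follow-up check is just an optional sanity check and not part of the official steps:
        # run from a web terminal if the array is stuck on "Retry unmounting shares"
        umount /var/lib/docker
        # optional: confirm the docker image is no longer mounted
        mount | grep /var/lib/docker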
  3. I have released 2023.12.02. If you have tested the dev release you may need to update the Fans section, as I have changed the name to FAN123456 for All for Dell systems.
     - Add FAN Control support for Dell systems.
     - Add Supermicro X12 options. Thanks to C4Wiz for the PR.
     - Add ASRock EP2C612WS support, thanks to WillGunn for the PR.
     - Set shift class in table sorter, thanks to kcpants for the PR.
     - Refactor code to allow easier models/boards to be added.
    2 points
  4. I added more supporting info and I also submitted a bug report for WinFsp. https://github.com/winfsp/winfsp/issues/534
    1 point
  5. Which OS version were you running before? I guess you have upgraded from pre-6.12; as Rydx said, you need to upgrade to the current version from CA. The original versions before mine and theirs were not compatible because they broke the dashboard. I have passed the maintenance of my version to Rysz.
    1 point
  6. I just updated from 6.12.5 to 6.12.6 and cannot reproduce the issue you described. There's also no reason my current version of NUT wouldn't work with version 6.12.6... Perhaps you were on some other deprecated version of NUT (maybe upgrading from a very old OS?). Please make sure to install the version by me (Rysz/desertwitch), which is the only actively maintained one.
    1 point
  7. I'm not that well versed in unRaid yet and still learning, but if I'm not mistaken you have to create a new configuration under "Tools/New Config", since your setting/array is changing.
    1 point
  8. Did an upgrade from 6.11.4 to 6.12.6 and ... it worked flawlessly. Docker, VMs ... everything fine.
    1 point
  9. In the next few days I'll disconnect my drive to see what one draws in standby. The SATA SSD was still attached before, although at idle it was probably well below 0.5W. According to the data sheet, those are the values for mine. But fine, I knew that, and they are enterprise drives after all. As for the parity, I'll probably do the same. With the few important files I keep on it, the parity syncs make little sense and the extra power draw is out of all proportion to the benefit. That way I at least still have a spare drive sitting in the cupboard. 🙂 That would drive me crazy too, but yes, at some point you have to know when to stop. 😄 Good that you wrote that. Then I'll probably stay on v6.12.4 for now as well.
    1 point
  10. I have a 9300-8i (bought new in Germany back then) and a few used MegaRAID 9361 cards (-8i and x16i). I haven't noticed any such switch in the firmware/BIOS. In my experience, and as far as I know, the PCIe 3.0 based MegaRAID cards of that generation cannot be switched over or reflashed. That was the reason I use a series 7 Adaptec in my big Unraid system; those can be switched to HBA mode.
    1 point
  11. Enable the syslog server and post that after a crash.
    1 point
  12. Thank you. It is rebuilding at the moment.
    1 point
  13. If you donate to a charity that supports a good cause, like children in Ukraine, or any cause you feel strongly about, or even just UNICEF or some other organisation, then you have done more than 99.99% of people, and everyone can raise a beer to you for your generosity.
    1 point
  14. I'm not entirely sure which model he means (since these cards are currently listed under Broadcom rather), but if he's using the Broadcom SAS 9300-8i, PCIe 3.0 x8 https://geizhals.de/broadcom-sas-9300-8i-lsi00344-a976112.html: yes, that is an HBA without RAID. If, on the other hand, he's using a MegaRAID model, that is very much a RAID controller, and the PCIe 3.0 version usually really has no HBA support. The regular (HBA) 9300 series doesn't need to be reflashed; the MegaRAID cards of the same generation unfortunately have no IT-mode firmware. The larger models of the MegaRAID series have a cache and therefore also offer BBU solutions (with capacitors and flash packs).
    1 point
  15. Thanks @JorgeB, my unRaid server has been stable for over 24 hours now after changing from Macvlan to IPvlan.
    1 point
  16. This is an issue with the in-tree kernel r8169 driver; hopefully it will be fixed in a future kernel. In the meantime you can use the Realtek driver plugin: it installs the out-of-tree r8125 driver, and that one still works with jumbo frames.
    1 point
  17. I suspect your question will answer itself after reading these points: https://docs.unraid.net/unraid-os/manual/changing-the-flash-device/ "...The device has been lost or stolen - you have misplaced your USB flash device, or it has been stolen..." "...What if I can't backup my device?..."
    1 point
  18. 1 point
  19. Updated from 6.12.5. No issues to report. I appreciate the updates, but it does make me nervous when sequential updates fly out at this rate. Thanks
    1 point
  20. Looks like it was a temporary issue that resolved itself. Tried a new USB with a clean 6.12.6 install and that worked fine. While 6.12.5 was installing (just to check), I tried the original USB drive, and networking worked with it as well. I had tried resetting various parts of the network yesterday before posting this topic - it hadn't helped. Today, without actively changing anything, everything was working. While I'd love to say I knew what the root cause was, I'm good with a working system. Thanks for the help!
    1 point
  21. One last follow-up. Everything still seems to be rock solid with everything running. Seems maybe the docker folder and appdata for certain dockers was bad/corrupt somehow. Thank you, @JorgeB, for helping guide me through this!
    1 point
  22. While going to reboot this morning I noticed 6.12.5 was available. I applied the update and rebooted. It seems things are back to normal.
    1 point
  23. Enter your PAPERLESS_REDIS value without the brackets
    1 point
  24. There's no driver for that device. IIRC there's no in-tree Linux driver for it, at least I cannot find one. There have been some previous requests; not sure if anything has changed since:
    1 point
  25. While this is specifically for tunneled access, the same would apply for LAN access; depending on whether you use ipvlan or macvlan, you can do the following.
    1 point
  26. You'll need to do one of two things to connect to the VM using Tailscale: install Tailscale in the VM and then use that to connect to the services running in the VM, or enable subnet routing for the Tailscale instance running on Unraid.
    1 point
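     A short sketch of the second option (subnet routing), assuming Tailscale runs on the Unraid host and the VM lives on a 192.168.122.0/24 subnet; both the subnet and the way Tailscale is installed (plugin vs. container) are assumptions to adjust for your setup:
        # on Unraid: advertise the subnet the VM's IP belongs to
        tailscale up --advertise-routes=192.168.122.0/24
        # approve the advertised route in the Tailscale admin console, then on the client:
        tailscale up --accept-routes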
  27. That is correct, my macvlan traces have stopped in the boot process and in the logs after the change. That's my bad, I could have been clearer in my report; I'm trying to follow the rules and not be too public with my information, for security. Good to know that it's the bridge setting that removes the GUI options. eLpresidente That is correct. After deleting the network config file and having Unraid recreate it, and setting the settings per the release notes, when you get back to the GUI, if you set bridging off you will be using macvlan. This is the process:
     Step 1: connect a monitor to watch Unraid's boot process and get the IP at boot.
     Step 2: make a backup of the file before deleting it, so you can recover it.
     Step 3: delete the file /boot/config/network.cfg.
     Step 3.5: disable VMs and Docker before the reboot.
     Step 4: reboot.
     Step 5: acquire Unraid's new IP address.
     Step 6: sign in to Unraid.
     Step 7: Settings > Network. Follow the https://docs.unraid.net/unraid-os/release-notes/6.12.4/ release notes for macvlan (Disable Bridging). Example of my settings for macvlan:
     Step 8: reboot and check the log / Fix Common Problems plugin to see if you have any macvlan traces.
     Step 8.5: enable VMs and Docker, then reboot and check again. This will now use vhost@eth and/or macvtap0@eth0 for the IP of the Docker network, for macvlan connections to and from the containers.
    1 point
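     A minimal sketch of steps 2-3 from the post above (back up the network config, then remove it so Unraid regenerates defaults on the next boot); the backup filename is arbitrary:
        # keep a copy on the flash drive so the old settings can be restored
        cp /boot/config/network.cfg /boot/config/network.cfg.bak
        # remove the live config; Unraid recreates it with defaults on reboot
        rm /boot/config/network.cfg
        # disable the VM and Docker services in the GUI before rebooting (step 3.5), then reboot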
  28. Well... it doesn't work. After installing the RTL8168/RTL8111 plugin and rebooting you do get this back: Still, the system stubbornly stays at C3 at most as the lowest package C-state. I'll let it run a bit longer now, but I don't think I'll get it below a 9-10W mean anymore. I had another look at the GitHub repo the plugin comes from. For the r8125 they write: r8125 has enabled multiple queue and disabled ASPM. I'll probably have to revert, stay on 6.12.4 and pray that 6.13 comes with new drivers that can do this again...
    1 point
  29. It will be included from 6.13 onwards, so there is no need for the plugin from that version on.
    1 point
  30. @BiNiCKNiCH I updated to 6.12.5 today. Since the update, only C3 package state. Instead of TLP I then tried powertop again, etc. sudo lspci -vv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM |Disabled;|Enabled;)' outputs: You can see the network card and the PCI bridge are set to ASPM Disabled. They weren't on 6.12.4. I suspect they have been fiddling with the Realtek Ethernet driver. (On my second server, a W480 with an Intel Ethernet chipset, I don't have the problem.) It would be nice if you could run the command above to see whether you have the problem too. I'll probably go back to 6.12.4. If you have it as well, I'd ask around further among people who also use an RTL Ethernet chipset. Regards, Joerg
    1 point
  31. I have now adapted the plugin so that users are able to install it on Unraid 6.12.4 without any workarounds, and to make sure that packages for upcoming Unraid versions are available. The new plugin is also a bit more verbose; I made a few changes under the hood which make installation a bit easier and more reliable -> this means you no longer need a User Script that runs on startup. The plugin is now also part of the Plugin-Update-Helper: this basically ensures that when you update Unraid to a newer version, it will download the new plugin package version even before you reboot into the new Unraid version (keep an eye on your Unraid notifications, since the Plugin-Update-Helper will inform you there when the download has finished or failed). You can install the new plugin with this link (currently it's not on the CA App): https://raw.githubusercontent.com/ich777/unraid-i915-sriov/master/i915-sriov.plg Thanks to @domrockt for helping me test whether everything in the plugin works properly, since I have no compatible hardware on hand! (This plugin requires Unraid 6.12.4+)
    1 point
  32. Have you checked where any vdisks for the VM are located? According to the diagnostics it looks like they may be on disk1 or disk4 as that is where the ‘domains’ share currently has content. You also have the mover direction for that share set to be hass->array - if you want mover to transfer files to that pool it needs to be the other way around.
    1 point
  33. Hello everyone, I need your insights regarding a specific situation we're dealing with. We have an Unraid system in our house that's running two graphics cards (two Nvidia 960s), each dedicated to its own virtual machine. This setup has been working perfectly well for us. However, we are now encountering a problem, as we need to use a third VM. Unfortunately, our system has no room for additional graphics cards. I've seen on Google that it's possible to share a GPU between multiple VMs, specifically with Nvidia cards, as demonstrated in this video: https://www.youtube.com/watch?v=Bc3LGVi5Sio. But is it possible to do the same in Unraid, where multiple VMs can share a single graphics card while running at the same time? Also, in case this functionality is supported in Unraid, which graphics card should I buy that will work properly under such conditions? I noticed several threads on the Unraid forums discussing this topic, but most of them seem to be quite dated. Therefore, I am curious whether this is a possibility as of 2023, or if it's something that will be implemented in the near future. Any advice or guidance would be greatly appreciated! Thank you in advance.
    1 point
  34. Power consumption fluctuates between 9.6 and 12.5W with 2 NVMe drives in both M.2 slots and 5 SSDs (2 SSDs connected to the mainboard, 3 SSDs on the ASM1166 card): The C-states sometimes fluctuate between C6 and C8, but C8 is what you see most often: Even though the consumption is good, I still find it odd that you're not "allowed" to use PCIe slot 1 from the CPU, and that it then falls straight back to C2. I'll also open a support request with Gigabyte about this.
    1 point
  35. Hi, I would also be happy about a Fusion ioDrive driver. I have this: Mass storage controller: SanDisk ioDrive2 (rev 04) HP ioDrive 2 768GB. The current drivers are here; these also run with the latest Linux: https://github.com/RemixVSL/iomemory-vsl The drive also works great with Windows 10, Debian, TrueNAS etc. The drive has a write endurance of 11 petabytes! And you get these drives for 50-100 bucks on eBay, with warranty and only 5-10% used.
    1 point
  36. You can set up 2 WordPress containers in Unraid using the following steps.
     1. Go to the Docker tab in the Unraid web UI.
     2. Scroll to the bottom and click "add a container".
     3. Click on Template and find wordpress. This should autofill the information.
     4. Choose a different name such as "wordpress-2".
     5. Change the host port: 8081 ** choose a port that is not in use.
     6. Change the host path: /mnt/user/appdata/wordpress-2
     7. Use a different WORDPRESS_DB_NAME. ** Instead of wordpress-2, you can call it something else.
     You will then have to configure your reverse proxy to point each subdomain to the appropriate container.
    1 point
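     For reference, a docker-run sketch roughly equivalent to those template changes, using the official wordpress image; the database host and credentials shown here are placeholders (the Unraid template normally provides its own fields for these):
        docker run -d \
          --name wordpress-2 \
          -p 8081:80 \
          -v /mnt/user/appdata/wordpress-2:/var/www/html \
          -e WORDPRESS_DB_HOST=192.168.1.10:3306 \
          -e WORDPRESS_DB_USER=wpuser \
          -e WORDPRESS_DB_PASSWORD=changeme \
          -e WORDPRESS_DB_NAME=wordpress2 \
          wordpress:latest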
  37. Hi, I wanted to upgrade my single-parity setup with a 10 TB parity disk to a dual-parity setup with 16 TB disks. I want the old parity disk to become a data disk after the parity swap procedure. I read the "https://wiki.unraid.net/The_parity_swap_procedure" article, but in my case I don't want to remove an old data disk. I first inserted the new first 16 TB disk and precleared it. The array is stopped. I unassigned my old 10 TB parity disk from the parity slot. I then assigned the new 16 TB disk to the first parity slot and got a blue indicator. I could now start the array with "Start will start Parity-Sync and/or Data-Rebuild." But like in the article, I want the Copy button, with a statement indicating "Copy will copy the parity information to the new parity disk". When I assign the old 10 TB parity disk to a free data disk 7 slot, both indicators turn blue, but I only get the message "You may not add new disk(s) and also remove existing disk(s)." next to the Start button, and not the Copy button with a statement indicating "Copy will copy the parity information to the new parity disk". What am I doing wrong? I am using Unraid 6.9.2. Should I only assign the new 16 TB drive to the parity slot and do the data rebuild? I thought the copy procedure as described is faster? Thank you
    1 point
  38. With the array started type: btrfs dev del /dev/sdX1 /mnt/cache Replace X with the correct letter, and adjust the mountpoint if needed, e.g. if using a differently named pool. When that's done you'll get the cursor back; then you need to reset the pool assignments. To do that: stop the array, if Docker/VM services are using the cache pool disable them, unassign all pool devices, start the array to make Unraid "forget" the current cache config, stop the array, reassign the remaining devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), re-enable Docker/VMs if needed, start the array.
    1 point
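     A concrete example of the removal command above, assuming the device to drop shows up as sdc and the pool is mounted at /mnt/cache (both placeholders):
        # blocks until btrfs has migrated all data off the device
        btrfs dev del /dev/sdc1 /mnt/cache
        # from another terminal, check which devices remain in the pool
        btrfs fi show /mnt/cache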
  39. It's been a while since I played with this since I ended up selling my fusionIO drive in favor of U.2 NVMEs, but ich777 is correct. You can absolutely use his Docker container to build the kernel modules as a plugin, but it'll be a massive pain to keep up with releases. Here's my repository with the build script I was using for testing. YMMV: https://github.com/juchong/unraid-fusionio-build
    1 point
  40. I've been trying to figure this out, so I thought I would post what I have so far: There is a Discord forum for fusionIO, here. This tutorial gave me some info; it sounds like you have to get the source code and build a new kernel. This is the GitHub source code link, here. You need to build a custom kernel for Unraid, here (Unraid uses Slackware, I believe). My thought was to build a Slackware VM, here, figure this out and build the kernel. (Then could I copy the kernel to the USB boot drive? Don't know yet.) Be forewarned, Slackware is a bit of a different animal, CLI only, and there was a lot of prep to convert (deb packages) to be usable. There are a lot of gaps in the instructions, as I am primarily an Ubuntu user. Googling how to get fusionIO to work with Unraid, you might come across threads about drivers and paths, which have been requested and released. I think we need a GitHub project for fusionIO + Unraid so that people can contribute to this effort. As of this post, there are still lots of fusionIO boards on eBay. They are not the fastest, but for the home-brew server crowd this is a great, high-endurance drive I would love to get working. Hope this helps somebody.
    1 point
  41. I am also interested in taking 1 GPU, virtualizing it into multiple vGPU resource pools, then sharing those pools with different VMs. For example... a 10GB card split into 10 x 1GB vGPUs to be shared across 10 Windows VMs, each thinking they have their own discrete 1GB GPU. Is this possible now on Unraid?
    1 point
  42. Tried compiling with the latest GIT release (v4.2) as I am on kernel 4.19 on unraid 6.8.3 still. Managed to compile it, but needed to modify the .sh scripts - I assume no one else has run into this, so it must be something weird with my unraid build env. Basically some instances of "cat" seem to be broken, reporting 'standard output: No such file or directory'.... Very odd, not going to dwell on it for now; managed to work around this and load the module (left some notes below as to what I did). I added the script you linked to my /lib/udev/rules.d/95-fio.rules, but don't see any new devices show up. There wasn't any info on how exactly to use this either, so I may need to run something to kick off the script and create the new udev devices. I am a little tempted to manually populate and run the below and just see, but I will wait:
        /usr/bin/fio-status --field iom.format_uuid /dev/%k", SYMLINK+="disk/by-id/fio-%c
        /usr/bin/fio-status --field iom.format_uuid /dev/%P", SYMLINK+="disk/by-id/fio-%c-part%n
     If interested, here is the error output from the make:
        make[1]: Leaving directory '/mnt/user/dev/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16'
        ./kfio_config.sh -a x86_64 -o include/fio/port/linux/kfio_config.h -k /lib/modules/4.19.107-Unraid/build -p -d /mnt/user/dev/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16/kfio_config -l 0
        cat: standard output: No such file or directory
        1609869599.051 Exiting
        Preserving configdir due to '-p' option: /mnt/user/dev/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16/kfio_config
        make: *** [Makefile:88: include/fio/port/linux/kfio_config.h] Error 1
     kfio_config.sh complains that "cat: standard output: No such file or directory", which is ridiculous, as cat certainly exists and is in the path...
        root@cerebellum:/mnt/user/dev/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16# env | grep PATH
        MANPATH=/usr/local/man:/usr/man
        NODE_PATH=/usr/lib64/node_modules
        PATH=.:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
        root@cerebellum:/mnt/user/dev/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16# which cat
        /usr/bin/cat
     Here is the line it's stopping at; I added an echo to identify it:
        : "${TMP_OUTPUTFILE:=${CONFIGDIR}/kfio_config.h}"
        echo "cat instance 5"
        cat <<EOF
     If I modify the line with the cat command in kfio_config.sh from "cat <<EOF" to "echo <<EOF" ... to use echo instead of cat, we progress!! So the change now looks like:
        : "${TMP_OUTPUTFILE:=${CONFIGDIR}/kfio_config.h}"
        echo "cat instance 5"
        #cat <<EOF
        echo <<EOF
     and re-running the line from the makefile script, it completes now!
        ./kfio_config.sh -a x86_64 -o include/fio/port/linux/kfio_config.h -k /lib/modules/4.19.107-Unraid/build -p -d /mnt/user/dev/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16/kfio_config -l 0
     So, stop and resume the original make command, which finally finishes:
        Ok: altered version for iomemory-vsl.ko appended 2a18ca0
        ### I don't see any new /dev/sdX devices, I assume the linked script goes somewhere like /lib/udev/rules.d/95-fio.rules ###
        insmod iomemory-vsl.ko
        ## dmesg -T | tail -100
        [Tue Jan 5 13:38:33 2021] <6>fioinf VSL configuration hash: 8f82ea05bdf1195cb400fb48e4ef09fc49b3c1aa
        [Tue Jan 5 13:38:33 2021] <6>fioinf
        [Tue Jan 5 13:38:33 2021] <6>fioinf Copyright (c) 2006-2014 Fusion-io, Inc. (acquired by SanDisk Corp. 2014)
        [Tue Jan 5 13:38:33 2021] <6>fioinf Copyright (c) 2014-2016 SanDisk Corp. and/or all its affiliates. (acquired by Western Digital Corp. 2016)
        [Tue Jan 5 13:38:33 2021] <6>fioinf Copyright (c) 2016-2018 Western Digital Technologies, Inc. All rights reserved.
        [Tue Jan 5 13:38:33 2021] <6>fioinf For Terms and Conditions see the License file included
        [Tue Jan 5 13:38:33 2021] <6>fioinf with this driver package.
        [Tue Jan 5 13:38:33 2021] <6>fioinf
        [Tue Jan 5 13:38:33 2021] <6>fioinf ioDrive driver 3.2.16.1731 4870ad45b7ea 2a18ca0 a loading...
        [Tue Jan 5 13:38:33 2021] <6>fioinf ioDrive 0000:04:00.0: mapping controller on BAR 5
        [Tue Jan 5 13:38:33 2021] <6>fioinf ioDrive 0000:04:00.0: MSI enabled
        [Tue Jan 5 13:38:33 2021] <6>fioinf ioDrive 0000:04:00.0: using MSI interrupts
        [Tue Jan 5 13:38:33 2021] <6>fioinf ioDrive 0000:04:00.0.0: Starting master controller
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Adapter serial number is 602555
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Board serial number is 602555
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Adapter serial number is 602555
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Default capacity 640.000 GBytes
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Default sector size 512 bytes
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Rated endurance 10.00 PBytes
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: 85C temp range hardware found
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Firmware version 7.1.17 116786 (0x700411 0x1c832)
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Platform version 10
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Firmware VCS version 116786 [0x1c832]
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Firmware VCS uid 0xaeb15671994a45642f91efbb214fa428e4245f8a
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Powercut flush: Enabled
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: PCIe power monitor enabled (master). Limit set to 24.750 watts.
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Thermal monitoring: Enabled
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0.0: Hardware temperature alarm set for 85C.
        [Tue Jan 5 13:38:34 2021] <6>fioinf ioDrive 0000:04:00.0: Found device fct0 (Fusion-io ioDrive 640GB 0000:04:00.0) on pipeline 0
        [Tue Jan 5 13:38:36 2021] <6>fioinf Fusion-io ioDrive 640GB 0000:04:00.0: probed fct0
        [Tue Jan 5 13:38:36 2021] <6>fioinf Fusion-io ioDrive 640GB 0000:04:00.0: sector_size=512
        [Tue Jan 5 13:38:36 2021] <6>fioinf Fusion-io ioDrive 640GB 0000:04:00.0: setting channel range data to [2 .. 4095]
        [Tue Jan 5 13:38:36 2021] <6>fioinf Fusion-io ioDrive 640GB 0000:04:00.0: Found metadata in EBs 168-169, loading...
        [Tue Jan 5 13:38:36 2021] <6>fioinf Fusion-io ioDrive 640GB 0000:04:00.0: setting recovered append point 169+198180864
        [Tue Jan 5 13:38:36 2021] <6>fioinf Fusion-io ioDrive 640GB 0000:04:00.0: Creating device of size 640000000000 bytes with 1250000000 sectors of 512 bytes (213422 mapped).
        [Tue Jan 5 13:38:36 2021] fioinf Fusion-io ioDrive 640GB 0000:04:00.0: Creating block device fioa: major: 254 minor: 0 sector size: 512...
        [Tue Jan 5 13:38:36 2021] fioa: fioa1 fioa2
        [Tue Jan 5 13:38:36 2021] <6>fioinf Fusion-io ioDrive 640GB 0000:04:00.0: Attach succeeded.
        root@cerebellum:/mnt/user/dev# /usr/bin/fio-status
        Found 1 ioMemory device in this system
        Driver version: 3.2.16 build 1731
        Adapter: Single Adapter
        Fusion-io ioDrive 640GB, Product Number:FS1-004-640-CS, SN:602555, FIO SN:602555
        External Power: NOT connected
        PCIe Power limit threshold: 24.75W
        Connected ioMemory modules:
        fct0: Product Number:FS1-004-640-CS, SN:602555
        fct0 Attached
        ioDrive 640GB, Product Number:FS1-004-640-CS, SN:602555
        PCI:04:00.0, Slot Number:5
        Firmware v7.1.17, rev 116786 Public
        640.00 GBytes device size
        Internal temperature: 54.14 degC, max 54.63 degC
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Contained VSUs:
        fioa: ID:0, UUID:0630fa34-0505-1441-9d5b-6d79afe74709
        fioa State: Online, Type: block device
        ID:0, UUID:0630fa34-0505-1441-9d5b-6d79afe74709
        640.00 GBytes device size
    1 point
  43. I put in a feature request with the guys maintaining the open source driver to change the dev name, as that appears to be the only sticking point. Their reply was that they're unable to control the block names. They did give a suggestion about mapping names in udev, but I think I read earlier in this thread that that had already been tried. It looks, to me anyway, like we're reliant on Lime to change their device identification logic to include the fio* prefixes. Here's the thread on git if anyone is interested in reading their full responses. Link
    1 point
  44. No, completely the opposite, I expect the regular user to not be able to do most of what I outlined. I'm not saying the easy-button feature isn't needed; I'm pointing out that a Linux-savvy programmer COULD give us a plugin to handle it, but the fact that none have is evidence that if it's going to happen, it's probably going to have to wait in line with the myriad of other features that users are asking limetech itself to integrate. Hopefully the recent paid staff additions will accelerate the implementation of these many wanted features, but I'm not holding my breath. A great many (almost all?) of the features currently being added to the core of unraid started life as user-created add-ons, and have been slowly incorporated. The fact that nobody has stepped up and written a community-supported plugin to do this specific function says something. I was simply pointing out what obstacles are there for a community member to set this up. That doesn't mean it won't eventually happen. Who knows, maybe limetech has this all lined up for the next major beta after the 6.9 series. I've also left out some major issues that would have to be addressed with a slot-removal function, namely the difficulties posed by the very thing that makes unraid different. Each drive slot is an independent file system, and can be referenced with individual specificity if desired. That means that if you remove a slot and there are still file paths configured to write to it directly, bad things will happen. There are so many different places that could be done, it's difficult to cover your bases, as simply moving the files to a different path doesn't mean the application that wrote them there will be able to find them at the new location. If everyone were forced to use fuse /mnt/user paths, there would be less of an issue, but that's not how things are right now. /mnt/diskX is a perfectly valid path to save and retrieve files, and some people use direct paths for practically everything for speed and organizational reasons. Also, the /mnt/user shares have include and exclude rules that reference the disk slots; you have to reconcile what happens when one of those slots is gone, and how to allocate the data that is to be moved from the slot to be vacated. It's not as cut and dried as it seems.
    1 point
  45. I have to disagree. unRAID is being billed as an appliance that gives you enterprise features for less money, and does lots of things without requiring as much user knowledge as a home-spun setup using a more common distro. If you're saying a regular user should be totally OK with using the terminal to zero a drive with dd, then you're kind of missing the point. I could actually see this saving time, in the sense that a user kicks off the drain operation, then comes back a day or two later, can restart the array at a more convenient time, and yank the drive. How I'd see it happening is more like:
     - User selects drive to drain
     - Data is evacuated to other drives
     - Drive is zeroed out
     - System leaves a pending operation to remove the drive from the config on next array stop
     - Notification is left for the user of pending array operations
     - User stops array
     - Config is changed such that the drive is removed
     - User starts array
     So this has gone from requiring several integrations in the UI, potentially installing a new plugin, as well as terminal commands (that have to be left running for ages, mind), to 2 clicks on a default unRAID install. Look, at the end of the day Linux gurus are always going to scoff at the noobs, but if it's a storage appliance first, and one that sells itself on flexibility at that (different sizes/types of drives, etc.), then removing a drive is what I would call a fundamental - and likely expected from a new user's point of view - operation.
    1 point
  46. This is something that, if Limetech manages to implement it, will be great news. Only VMware does that for now. Sent from my Pixel 2 XL using Tapatalk
    1 point
  47. Hi folks! I've been able to get the driver to compile on 4.x kernels (Unraid 6.8.3) cleanly using the Unraid-Kernel-Helper docker. I'm working out how to get unRAID to recognize the drives now. Stay tuned!
    1 point
  48. As a technical note, FusionIO/Western Digital are still supporting their products with drivers current as of 01/30/2020. Energen....these are still great cards, and hella not obsolete for this application. I have an ioDrive2 and an SX350 that can do ~2GB/s+ R/W at 8W idle through an ESXi 6.7 VM with super low latency, like NVMe. If I were to make a guess, I'd say there are prolly ~50-60 of us in the forums that would integrate FusionIO products into our builds, and at least that many more that would be inclined to buy these cheap used cards in the future for that purpose. No, we aren't a majority... but it's not 5 or 6 or a dozen. There's dozens of us... Tom, the CEO and admin, chimed in on this thread and put the ball in our court. If we want to merge this into his repo, somebody will need to work on the 5.x kernel integration. I would suggest following this proxmox guide as a starting point. -> https://forum.proxmox.com/threads/configuring-fusion-io-sandisk-iodrive-iodrive2-ioscale-and-ioscale2-cards-with-proxmox.54832/
    1 point
  49. I figured it out. I needed to specify the byte offset of where the partition begins. For anyone who might have the same question in the future, here is what I did. From the unRAID command console, display the partition information of the vdisk:
        fdisk -l /mnt/disks/Windows/vdisk1.img
     I was after the values in red. The output will look something like this:
        Disk vdisk1.img: 20 GiB, 21474836480 bytes, 41943040 sectors
        Units: sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disklabel type: dos
        Disk identifier: 0xda00352d
        Device       Boot   Start      End  Sectors  Size Id Type
        vdisk1.img1  *       2048   206847   204800  100M  7 HPFS/NTFS/exFAT
        vdisk1.img2        206848 41940991 41734144 19.9G  7 HPFS/NTFS/exFAT
     To find the offset in bytes, multiply the sector start value by the sector size. In this case, I wanted to mount vdisk1.img2: 206848 * 512 = 105906176. Final command to mount the vdisk NTFS partition as read-only:
        mount -r -t ntfs -o loop,offset=105906176 /mnt/disks/Windows/vdisk1.img /mnt/test
    1 point
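     An alternative sketch that avoids the manual offset math by letting losetup scan the partitions inside the image; the paths and partition number are the example ones from the post above, and whether a read-only NTFS driver is available depends on the Unraid build:
        # attach the image to a loop device and create loopXp1, loopXp2, ... (-P = partition scan)
        LOOP=$(losetup -f --show -P /mnt/disks/Windows/vdisk1.img)
        mkdir -p /mnt/test
        # mount partition 2 (the NTFS data partition in the example) read-only
        mount -r -t ntfs "${LOOP}p2" /mnt/test
        # when finished: umount /mnt/test && losetup -d "$LOOP"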