Abnorm

Members

  • Posts: 86
Converted

  • Gender: Male

Abnorm's Achievements

Rookie (2/14) · Reputation: 4
  1. Strange, I can't remember if I've changed it at any point; it might've been way back before Squid's plugin showed up, to get the performance benefits. But in my use case it has actually made a difference: the disks are spinning down and staying down now, as expected.
  2. OK, I might've found something: in Settings -> Disk Settings, I set 'Tunable (md_write_method)' to AUTO instead of 'reconstruct write'. The disks haven't spun back up as of yet; I'll monitor it for a few days and report back.
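     If anyone wants to check spin state from the CLI while monitoring, hdparm can query a drive's power mode without waking it (a quick sketch; /dev/sdX is whichever array disk you want to check):

        # Report the drive's power state (active/idle vs. standby)
        hdparm -C /dev/sdX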
  3. The same issue here on 6.9.2; I did not have this issue before. Removed the Turbo Write plugin, did a manual spin down, and 30 seconds later it spins back up:

        Oct 6 10:31:57 BlackBoX root: Stopping CA Turbo Mode
        Oct 6 10:31:57 BlackBoX root: Terminating 31523
        Oct 6 10:32:05 BlackBoX flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdm
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdj
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdk
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdh
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdg
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdd
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdt
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sde
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdb
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdr
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdf
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdc
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sds
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdn
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdq
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdo
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdl
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdi
        Oct 6 10:32:11 BlackBoX emhttpd: spinning down /dev/sdp
        Oct 6 10:32:14 BlackBoX emhttpd: read SMART /dev/sdj
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdm
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdh
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdg
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdd
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdt
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sde
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdb
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdr
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdf
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdc
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sds
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdn
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdq
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdo
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdl
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdi
        Oct 6 10:32:46 BlackBoX emhttpd: read SMART /dev/sdp
        Oct 6 10:32:47 BlackBoX emhttpd: read SMART /dev/sdk
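     To catch the wake-up as it happens, one can follow the syslog while the disks are spun down (stock Unraid log path):

        # Watch spin-down and SMART-read events in real time
        tail -f /var/log/syslog | grep -E 'spinning (down|up)|read SMART'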
  4. Hey, just a quick question: would it be possible to add some functionality to force-unmount stuff? I'm especially having issues with NFS, where the NFS server sometimes crashes and I'm then unable to remotely restart my Unraid box, since it never stops trying to unmount. I have to cycle power (not a big deal, it only runs a few Dockers). NFS at least has a lazy-unmount switch, which would perhaps solve this (see the sketch below).
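     For reference, the lazy and forced variants from the CLI (a sketch; the mount point is an assumed example):

        # Lazy unmount: detach immediately, clean up once the FS is no longer busy
        umount -l /mnt/remotes/nfs-share

        # Forced unmount: intended for unreachable NFS servers
        umount -f /mnt/remotes/nfs-share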
  5. Foreword: not relevant to SMB. I seem to get this issue now; I never had it before. My setup is like this: I have a Debian VM running on my main Unraid box with rar2fs installed. On this VM I mount my media shares via NFSv3 first, then I point rar2fs at those shares, which in turn mounts the unrared/FuseFS files to separate folders, which in turn are shared via NFSv3 to my secondary box. The secondary box mounts these shares with the Unassigned Devices plugin over NFSv3, and they are mapped into the Plex Docker as my media locations. While troubleshooting I've found that the rar2fs/FuseFS side works fine; no processes are crashing or anything. It seems that the initial NFSv3 mount between the main Unraid box and the VM goes stale. When that happens, the unrared media is (naturally) not available in Plex. But there are only 2 folders this happens to, and they are not the biggest ones; they're the smallest, which is bizarre. This worked perfectly before I upgraded to 6.9/6.9.1, so I wonder what has actually changed here that could be related. Turning off hard links did not help, although it's worth mentioning I have not rebooted the servers after disabling it; I will try that tomorrow just to make sure. PS: I've also tried changing the allowed open files to an astronomical value without any difference, as that seems to be somewhat relevant to why you get stale file handles.
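     For anyone chasing the same thing, here is a quick way to confirm a mount has gone stale and recover it (a sketch; the path is an assumed example, and the remount assumes an fstab entry for the share):

        # stat reports "Stale file handle" on a dead NFSv3 mount
        if stat /mnt/media/share1 2>&1 | grep -qi 'stale'; then
            umount -l /mnt/media/share1   # lazily detach the dead mount
            mount /mnt/media/share1       # remount via the fstab entry
        fi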
  6. Read the documentation. The WebUI can be reached at http://<yourip>:8080/pwndrop
  7. You might be able to reconfigure the log location by adding a location in your variables, but you'll need to look into the UNMS config files, I presume. I have not tried it myself; maybe @digiblur has some ideas? Maybe it would be possible to set up the Docker so we can configure log locations ourselves (something like the mapping sketched below)?
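     Untested idea: map a host path over the container's log directory so the logs land where you want them. The container-side path here is hypothetical; check the UNMS config for the real one:

        # Hypothetical volume mapping for the UNMS container's logs
        -v /mnt/user/appdata/unms/logs:/home/app/unms/data/logs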
  8. Had this exact issue today; I recently changed from a RAID0 with 2 SSD drives to a single SSD. Had to reformat my cache drive to fix this, as mentioned earlier in this thread. Strange issue, but whatever, it works again! Ramblings: I also had to delete my docker.img file and redownload the Dockers (you'll get a notice about this when you try to enable your Docker service after the reformat and backup restore). And just a side note, I took a backup of the data first with rsync -a --progress /mnt/cache/ /mnt/disk1/temp before formatting, of course, and reversed it when copying back (see below). One more note: the Plex database files failed to copy; I suspect that when the cache got "full" it somehow corrupted the DB files. Thankfully my Plex takes its own DB backups, so I just had to roll back to the latest backup by renaming the affected .db files. All good now.
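     Spelled out, the backup/restore pair looks like this (trailing slashes matter to rsync; /mnt/disk1/temp is just the scratch location used above):

        # Backup: copy the cache contents to a temp folder on disk1
        rsync -a --progress /mnt/cache/ /mnt/disk1/temp/

        # Restore: copy everything back after reformatting the cache
        rsync -a --progress /mnt/disk1/temp/ /mnt/cache/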
  9. Hey guys, first of all, thank you @Ich777 for all your hard work; this is a great addition to any Unraid deployment! So me and a few friends started messing around with Arma 3. It's really fun, but without any mods it's, well... Arma. Any idea how you define specific mods to be added to a server deployment? I see there's a "Mod" field in the Docker config but I have no idea how to use it. Would love some pointers. I thought maybe if the account connecting to Steam is subscribed to specific mods in the workshop, it would sort itself out. Thanks again for this great addition, have a good one!
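     For context, a stock Arma 3 dedicated server loads mods via the -mod startup parameter, pointing at @-prefixed mod folders, so the "Mod" field presumably feeds into something like this (an assumption about this container; the mod names are just examples):

        # Start the Arma 3 server with two mods loaded
        ./arma3server -mod="@CBA_A3;@ace" -config=server.cfg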
  10. Hey all, I'm in the process of moving my server to a new rack chassis and needed some SFF-8643-compatible controllers for the backplane. I bought two Dell WFN6R via eBay, and this guide worked for me: https://jc-lan.org/2020/01/09/crossflash-dell-0wfn6r-9341-8i-to-9300-8i-it-mode/ (you'll need to short the TP12 jumper). Firmware and tools can be found on Broadcom's site: https://www.broadcom.com/support/download-search?pg=Storage+Adapters,+Controllers,+and+ICs&pf=Storage+Adapters,+Controllers,+and+ICs&pn=SAS+9300-8i+Host+Bus+Adapter&pa=&po=&dk=&pl= You'll need the following files:
     • Installer_P16_for_UEFI.zip (the UEFI flashing tool)
     • 9300_8i_Package_P16_IR_IT_FW_BIOS_for_MSDOS_Windows.zip (firmware etc.)
     Use UEFI; DOS mode didn't work for me (rough command sketch below). Hopefully this helps someone. Cheers!
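     From the UEFI shell, the flash itself goes roughly like this with Broadcom's sas3flash tool (a sketch based on the guide above; the exact file names must match what's inside the downloaded packages):

        # Erase the existing flash region, then write the 9300-8i IT-mode firmware and BIOS
        sas3flash.efi -o -e 6
        sas3flash.efi -o -f SAS9300_8i_IT.bin -b mptsas3.rom

        # Verify the controller now reports the IT firmware
        sas3flash.efi -list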
  11. Add this to 'Extra Parameters' on the Docker config page (you need to click "Advanced View"): "--log-opt max-size=10G", without the quotes. This caps the container log at 10 GB; when it reaches the limit, Docker rotates it rather than letting it grow forever. You can set it to whatever you want, though. The growth itself is normal when the log is allowed to grow as it pleases.
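     If you'd rather keep a few smaller rotated files instead of one large one, max-size combines with max-file (the values here are just examples):

        # Keep at most 3 rotated log files of 50 MB each for this container
        --log-opt max-size=50m --log-opt max-file=3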
  12. Well, it worked perfectly here; every update from 5.x to 6.8.3 has never given me a single issue. This has to be on your end. Post your diagnostics, stop bashing Unraid for your own mistakes, and behave like a rational person.
  13. Great, thank you for the input!
  14. Hey, I've been running 2x SSDs in RAID0 for quite a while now, and I'm getting some write errors on one of the drives as they're getting pretty old. I've got a single 1 TB SSD as a replacement (getting the second one later). So, how do I go about changing my cache to not use RAID anymore? Should I just change the mode to "single", or will that delete the existing data? I'll do a backup first, of course. Thanks!
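     For reference, on a btrfs cache pool the raid0-to-single switch is a balance with conversion filters, which rewrites the profiles in place without deleting data (a sketch; this is what the GUI profile change boils down to, but a backup is still wise):

        # Convert both data and metadata profiles to single, in place
        btrfs balance start -dconvert=single -mconvert=single /mnt/cache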