Trigonal

  1. Just upgraded from 6.8.3 (with the older Nvidia plugin) to 6.9.1 without any issues. Upgrade steps for me were:
     - Backup flash + screenshot drive allocations
     - Stop all Docker containers & disable autostart
     - Stop all VMs & disable autostart
     - Uninstall the old Nvidia plugin & make sure all other plugins are up to date
     - Upgrade to 6.9.1 and reboot
     - Install @ich777's Nvidia driver plugin and wait for the install to finish
     - Disable then re-enable Docker
     - Check the GPU UUID in settings and confirm it is the same one used by the Jellyfin container
     - Reconfigure autostart on containers and VMs
     - Start containers ...... profit
     Thanks to all involved for another smooth upgrade.
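The "stop all containers / stop all VMs" steps can also be done from the CLI; a hedged sketch, assuming the stock docker and libvirt (virsh) tools that Unraid ships (the web UI buttons do the same thing). Each block is guarded so the script is a harmless no-op on a machine without those tools.

```shell
# Stop every running Docker container, if docker is present:
if command -v docker >/dev/null 2>&1; then
  docker ps -q | xargs -r docker stop      # xargs -r skips if nothing running
fi

# Ask each running VM to shut down gracefully, if libvirt is present:
if command -v virsh >/dev/null 2>&1; then
  for vm in $(virsh list --name); do
    virsh shutdown "$vm"
  done
fi

echo "containers and VMs stopped"
```

Autostart still has to be toggled in the web UI (or by editing the container/VM templates) before rebooting.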
  2. You may want to switch the direction of the rear CPU heatsink fan; it looks like they are set up to exhaust into one another.
  3. Hey,
     - Yes, your NVMe SSD can be used for the temp download directory + Plex data. When creating the shares you'll just need to assign them to the cache drive only.
     - You shouldn't need to worry about the mover with a typical Sonarr/Radarr + download client setup, as the media management program handles the move once the download is complete. Your download directory can be on the cache drive and your media directory on the array, but because both paths are mapped inside Sonarr/Radarr, it isn't aware of the difference and will just move files as normal.
     - Yes, you will need to start with freshly formatted drives (depending on your setup I would also configure an encrypted array).
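The point about the mover can be shown with a tiny sketch: from Sonarr/Radarr's point of view, a completed download is just a move between two paths it can see. The /tmp directories below are stand-ins for the usual /mnt/user/downloads (cache-only) and /mnt/user/media (array) shares; the share names are examples, not anything Unraid mandates.

```shell
# Stand-in directories for a cache-only "downloads" share and an
# array-backed "media" share (real Unraid paths would be /mnt/user/...).
root=/tmp/share_mapping_demo
mkdir -p "$root/downloads/complete" "$root/media/tv"
printf 'video data\n' > "$root/downloads/complete/episode.mkv"

# Once the download client finishes, the media manager simply moves the
# file from the download path to the media path - no mover involved:
mv "$root/downloads/complete/episode.mkv" "$root/media/tv/episode.mkv"
ls "$root/media/tv"
```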
  4. With Unassigned Devices installed you should be able to connect and mount your external hard drive, then copy the files using a GUI file manager like the Krusader docker, or even the CLI if you're comfortable with that option. This video should cover the essentials.
  5. @trurl Just restored the backup from last month, removed super.dat, reassigned the disks to their required slots, and everything is back up and running. Thanks heaps for the tip.
  6. @trurl Thanks for the tip. I will try that tonight before performing a fresh install, as I'd rather not have to reconfigure plugins and everything else. The array shouldn't be able to autostart as it is encrypted.
  7. @Gragorg Thanks, I missed that part in the documentation. Will start the process of a fresh install and setup ASAP.
  8. Hey all,
     Today I found that my server had become unresponsive and I was unable to perform a graceful shutdown, either via the web interface or over SSH, so once I got home I had to do a hard reset. After the restart, the boot drive could no longer be mounted on startup. I plugged the USB drive into another machine and tried to copy the config directory, but unfortunately the config files and all sub-directories return CRC errors when I attempt to copy them. On a whim I also tried to image the drive, but that was unsuccessful.
     I have a flash backup from roughly a month ago, but since then I have upgraded the 2 parity drives and moved the old drives into the array, so the old config is no longer "plug and play". It appears that I forgot to take another backup after this hardware change 😧
     Is it safe to assign the drives to their required slots on the Main tab, or will that put the data on those drives at risk? If so, what is the best approach?
     ** Update ** Was able to recover what looks to be most of the config files from the root config directory and sub-directories. Which config file assigns the disks to their roles in the array / parity?
     ** Update 2 ** I've just read on the forums that restoring from an older backup with a mismatched drive configuration is not a good idea. Is the best approach to proceed with a fresh install of Unraid? If so, how are you able to "import" the existing data on the drives? Thanks!
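For anyone landing here later: the disk-to-slot assignments live in config/super.dat on the flash drive, which is why a fresh flash backup after any drive change matters. A hedged sketch of taking that backup from the command line, using a /tmp stand-in for /boot so the commands can be tried anywhere:

```shell
# /tmp stand-in for the flash drive (on a live Unraid box this is /boot).
boot=/tmp/flash_demo
mkdir -p "$boot/config"
echo "stand-in for the binary disk-assignment data" > "$boot/config/super.dat"

# Archive the whole config directory - super.dat (disk assignments) plus
# plugin, share, and network settings - to somewhere off the flash drive:
backup=/tmp/flash-backup.tgz
tar -czf "$backup" -C "$boot" config
tar -tzf "$backup"    # list the archive to confirm super.dat is in it
```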
  9. Quick update. Replaced the dodgy drive and all is good now. Thanks again for the help
  10. Thanks for the help. I've updated the HBA to the latest firmware and also checked all connections to the drives and backplane. Ran another manual check last night and parity drive 2 died in spectacular fashion with over 7000 reported bad sector errors, so I've shut down the array until I can source a temporary replacement drive locally.
  11. Sorry, I should have mentioned that the monthly checks were originally set to correcting, but I have now set them to non-correcting. So the monthly scan found and fixed 1732 errors on the 1st, then 4 days later 3926 errors were found. No unclean shutdowns that I am aware of, and the system is on a UPS with Unraid configured to shut down if runtime falls below a set time or the battery drops below a set percentage. Please find the diagnostics zip attached: theblackbox-diagnostics-20200605-1653.zip
  12. Hey All,
      I've been running Unraid for about a year now, but recent parity checks have returned some worrying results. I run a scheduled monthly check, and for the first 8 months or so everything was fine with 0 errors, but the last few months have been getting increasingly bad:
      - 01/04/2020 = 0 errors
      - 01/05/2020 = 120 errors
      - 01/06/2020 = 1732 errors
      After this most recent monthly check I ran a manual check last night, which returned 3926 errors along with a SMART warning for one of the parity drives ("reported uncorrect is 1"). Between the monthly check and last night I would estimate another ~40GB of data was added to the data drives.
      Quick background on the installed drives:
      - Data array: 2 x 3TB Seagate Ironwolf (model ST3000VN007), 3 x 3TB WD Red (older non-SMR drives, model WDC_WD30EFRX)
      - Parity: 2 x 4TB Seagate Barracuda (model ST4000DM004), pulled from external USB enclosures
      - Controller: LSI 9200-8i (I suspect this to be a Chinese copy)
      I suspect that the 4TB Barracuda drives are not up to the stress of parity and 24/7 operation and was going to order 6TB Ironwolf drives, but wanted to get your opinion before spending any cash. Thanks
  13. I would agree with this, as docker support and the ability to do hardware transcoding were the main factors in switching my home system from FreeNAS to Unraid, despite the concerns around reduced speeds, multi-array support, security, and all the fantastic ZFS benefits. Native driver support would be great as it would remove the dependency on the LinuxServer team; let's face it, they are volunteers, and priorities can change or projects can fall by the wayside. (I do not mean any disrespect to LS.io, and I am extremely thankful for all the effort you've put in.)