Everything posted by Cessquill

  1. As a thought, I was searching yesterday and discovered that the onboard LSI 2308 was running old firmware in IR mode...

     ```
     LSI Corporation SAS2 Flash Utility
     Version (2011.11.08)
     Copyright (c) 2008-2011 LSI Corporation. All rights reserved

     Adapter Selected is a LSI SAS: SAS2308_1(Rev 5)

     Controller Number           : 0
     Controller                  : SAS2308_1(Rev 5)
     PCI Address                 : 00:02:00:00
     SAS Address                 : 5003048-0-11b9-9100
     NVDATA Version (Default)    : 0f.00.00.12
     NVDATA Version (Persistent) : 0f.00.00.12
     Firmware Product ID         : 0x2714
     Firmware Version            :
     NVDATA Vendor               : LSI
     NVDATA Product ID           : SMC2308-IR
     BIOS Version                :
     UEFI BSD Version            : N/A
     FCODE Version               : N/A
     Board Name                  : SMC2308-IR
     Board Assembly              : N/A
     Board Tracer Number         : N/A

     Finished Processing Commands Successfully.
     Exiting SAS2Flash.
     ```

     The last version I could find was

     Regardless of whether it fixes my issues, would it be recommended to upgrade to the latest version and switch over to IT mode? The motherboard predates my switch to a SAS backplane, and since it worked I didn't give it a second thought.

     @JorgeB - in researching this I also found your excellent post about flashing firmware, thank you.
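     If it helps anyone searching later, the usual sas2flash sequence for crossflashing a 2308 from IR to IT mode looks roughly like the below. This is a sketch from memory, not a tested procedure for this exact board, and the firmware/BIOS filenames (2308IT.bin, mptsas2.rom) are placeholders for whatever is in the package you download - follow JorgeB's flashing guide rather than this.

     ```
     sas2flash -list                            # confirm controller, firmware version and IR/IT mode
     sas2flash -o -e 6                          # erase the flash (controller stays blank until reflashed)
     sas2flash -o -f 2308IT.bin -b mptsas2.rom  # write IT firmware plus (optional) boot ROM
     sas2flash -list                            # verify NVDATA Product ID now ends in -IT
     ```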
  2. Thanks, I've set the spin-down delay to never. I'm hoping it's not a further LSI/Ironwolf issue, since all drives have the fixes applied (and the drive that dropped off, and previously disk 19, are ST8000VN0022 models, which didn't initially suffer from that). That said, if it's not that, I'm stuck. The PSU should cope with all drives spinning up.
  3. Hi - ongoing issue that I'm trying to diagnose. Once every week or so a drive will randomly drop off the array. It's not the same drive each time - it appears to be random (from what I can tell). See the first diagnostics, taken just after it happened this morning.

     When it does happen, I can stop VMs and all dockers, but I can never stop the docker or VM services (pizza wheels for a long time). Following that, I cannot stop the array with the GUI. See the second diagnostics, taken automatically because I have to give it an unclean shutdown over IPMI.

     Points to note:

       • All data drives (except parity & SSDs) are connected to a SAS backplane
       • It seems to be an Ironwolf drive that drops off (although I am aware of the LSI issues with them and have set up the drives as per recommendations)
       • Every time it happens I cannot stop the array

     I have no idea what could be causing this, and would really love to solve it - it seems to be running a drive rebuild a lot of the time.

     unraid1-diagnostics-20230919-1021.zip
     unraid1-diagnostics-20230919-1125.zip
  4. @Squid - For me at least, I get the following error on page load (Chrome)...

     ```
     ace.js:1 Uncaught TypeError: Cannot set properties of undefined (setting 'packaged')
         at o (ace.js:1:144)
         at ace.js:1:1594
         at ace.js:1:1612
     o @ ace.js:1
     (anonymous) @ ace.js:1
     (anonymous) @ ace.js:1
     dynamix.js?v=1680052794:5 jQuery.Deferred exception: ace.edit is not a function
     TypeError: ace.edit is not a function
         at HTMLDocument.<anonymous> (https://192-168-1-10.878757bd53f71ad14272183dbae65d47ceb4439a.myunraid.net/Settings/Userscripts:1218:20)
         at e (https://192-168-1-10.878757bd53f71ad14272183dbae65d47ceb4439a.myunraid.net/webGui/javascript/dynamix.js?v=1680052794:5:30310)
         at t (https://192-168-1-10.878757bd53f71ad14272183dbae65d47ceb4439a.myunraid.net/webGui/javascript/dynamix.js?v=1680052794:5:30612)
     undefined
     E.Deferred.exceptionHook @ dynamix.js?v=1680052794:5
     t @ dynamix.js?v=1680052794:5
     setTimeout (async)
     (anonymous) @ dynamix.js?v=1680052794:5
     c @ dynamix.js?v=1680052794:5
     fireWith @ dynamix.js?v=1680052794:5
     fire @ dynamix.js?v=1680052794:5
     c @ dynamix.js?v=1680052794:5
     fireWith @ dynamix.js?v=1680052794:5
     ready @ dynamix.js?v=1680052794:5
     $ @ dynamix.js?v=1680052794:5
     dynamix.js?v=1680052794:5 Uncaught TypeError: ace.edit is not a function
         at HTMLDocument.<anonymous> (Userscripts:1218:20)
         at e (dynamix.js?v=1680052794:5:30310)
         at t (dynamix.js?v=1680052794:5:30612)
     ```

     ...and then when clicking on edit script...

     ```
     Userscripts:1491 Uncaught TypeError: ace.edit is not a function
         at Object.success (Userscripts:1491:24)
         at c (dynamix.js?v=1680052794:5:28599)
         at Object.fireWith [as resolveWith] (dynamix.js?v=1680052794:5:29344)
         at l (dynamix.js?v=1680052794:5:80328)
         at XMLHttpRequest.<anonymous> (dynamix.js?v=1680052794:5:82782)
     ```

     Hope this helps.
  5. First port of call - change your PIA credentials; you've just posted your username and password. I'm guessing you've upgraded to Unraid 6.12.x? I haven't, but the posts directly above yours are reporting issues.
  6. How are you accessing Nextcloud remotely? In my case, I route through NginxProxyManager, and I needed to set the below in the Advanced tab... it's mentioned near the bottom of the docs: https://docs.nextcloud.com/server/20/admin_manual/configuration_files/big_file_upload_configuration.html?highlight=max upload size#:~:text=The default maximum file,2GB on 32Bit OS-architecture

     (Note: I set this up several years ago and can't remember whether there was anything else. If you're routing via a different method, I'm not sure, but would guess it's something similar.)
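     For reference, what goes in NPM's Advanced tab is a small nginx override along these lines. This is a sketch of what I remember setting - the exact values are illustrative, so check the Nextcloud docs linked above:

     ```nginx
     # Let large uploads through the proxy (0 = unlimited; set a sane cap if you prefer)
     client_max_body_size 0;
     # Stream the upload through to Nextcloud instead of buffering it all on the proxy first
     proxy_request_buffering off;
     ```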
  7. I may have got this *completely* wrong, but my understanding of Unraid dockers in this situation is this: all variables, paths, etc. are created when you initially install the docker. Your template is a snapshot of what the author specified at that time (which you can further edit as you wish). If the author edits, adds or removes options further down the line, your settings are not affected when you update. For example, I manually added a transcode folder at some point, and I'm not sure how an update would handle this. If this is completely bad information, someone let me know and I'll delete it - I don't want bad advice in searches.
  8. Yep, that was the first of two commands in the script; the second one is the one I use. I can't quite remember, but I *think* the commented-out command is the one to use in general Docker circumstances, and the second one, which the script actually uses, is for Unraid.
  9. If you just mean running Nextcloud's cron tasks periodically, I have the following in the User Scripts plugin, set to run every 5 minutes:

     ```bash
     #!/bin/bash
     #docker exec -u www-data Nextcloud php -f /var/www/html/cron.php
     docker exec Nextcloud php -f /var/www/html/cron.php
     exit 0
     ```

     Disclaimer: this was set up a fair while ago, and aside from updating the docker I haven't been diving too deep into Nextcloud, just using it.
  10. This thread is for the linuxserver.io version of the Plex docker. You've got the plexinc version there. Also, it looks like you're transcoding directly to your array, which I'd have thought would be a nightmare.
  11. I'm not sure whether this fix applies to your situation, but I hope your system is more stable now. I've not had another issue since this thread was created, but there may be different issues with different LSI cards - not sure.
  12. Yes. I have the former (a script that puts the Nvidia card into power-save mode). It's duplicated so that it runs on array start, and then hourly. (from a SpaceInvaderOne video)
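      For anyone after the same thing: as far as I remember, the core of that script is just enabling the driver's persistence mode so the card can drop to its idle power state. A sketch only - SpaceInvaderOne's actual script may do more than this:

      ```bash
      #!/bin/bash
      # Keep the NVIDIA kernel driver loaded so the card can idle at low power.
      # Sketch from memory; the original script may include extra steps.
      nvidia-smi --persistence-mode=1
      ```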
  13. Yes, but on large libraries you're talking about hours per backup where dockers are off while millions of files are backed up (I estimate that, due to backing up my entire Plex folder, my dockers are down for around 10 days every year). There's also the amount of storage these backups take up. Touch wood, I've not had to restore Plex before, but I have refreshed all metadata before and it's not the end of the world.
  14. Thanks for this - had the same problem when upgrading, although I'm not sure whether it didn't coincide with Windows updates, as I reverted back to 6.11.1 from a Flash backup and still couldn't access the shares. Adding "ntlm auth = Yes" fixed it for me.
  15. Fair enough. In my circumstance, the specific configuration of a proxy host in NPM required the site it was pointing at to be running before it would start. If you're sure that's not the case I'll butt out.
  16. Check what other dockers NPM relies on (specifically, any more complex sites configured). When I was running a site (the Jitsi docker, I think), if it wasn't running then NPM would never start. If Jitsi was running, NPM would start fine. My guess is that CA is correctly starting NPM, but NPM is shutting itself down because something it relies on is not present.

      A few notes:

        • I had this a few years ago, but no longer needed Jitsi (if it was that), so didn't find a solution.
        • I don't know whether CA Backup/Restore adheres to the docker system's ordering and delay options when restarting containers.
        • It's easy to test - shut down the dockers NPM is linked to one at a time and restart NPM. If it doesn't restart with a specific docker stopped, it's likely that docker is not (fully) running when NPM is restarted following a backup.
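      That test can be sketched as the below (the container names are placeholders - substitute whatever NPM's proxy hosts actually point at):

      ```bash
      #!/bin/bash
      # Stop one suspected dependency, bounce NPM, and see whether it stays up.
      docker stop jitsi                            # placeholder name
      docker restart NginxProxyManager
      sleep 10
      docker ps --filter name=NginxProxyManager    # is it still running?
      docker logs --tail 50 NginxProxyManager      # if not, why did it stop?
      docker start jitsi
      ```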
  17. It also left my four dockers in question with an unknown update status. That was fixed with a force update on them.
  18. This happened to me just earlier too. I got a "command failed" next to every update and no "Done" button on the modal dialog (which leaves a dead page). Four orphaned images to remove, but it looks like the containers updated OK. I was on the way out, so didn't get a chance to log or report it.
  19. Three posts above yours. With me, it was making sure no other disk activity on my main AppData drive slowed down anything that Plex was doing. I've currently got a lot of read/write on that drive which is slowing my dockers down (trying to track down the culprit).
  20. Unbalance copies then deletes the source files (similar to how Midnight Commander does). If you're moving a lot of data, there may be a cleanup period at the end where the original files are removed.
  21. Absolutely, yes. Thought of that as I was typing - apologies. Fix Common Problems would still warn against a share across multiple pools though, yes? EDIT: I've now gone and set all dockers to point to /mnt/docker/.appdata... Turns out I had already pointed Plex to /mnt/plex/appdata, so I'd obviously started at some point. Also to note, stopped the docker service and set the default docker location to /mnt/docker/appdata
  22. Yep, this is how I have mine set up...

        • An SSD with a cache pool named "Docker"
        • An SSD with a cache pool named "Plex"
        • An "appdata" user share on the Docker pool. All docker appdata is in here except Plex's
        • There is also an "appdata" folder on the Plex drive. All Plex appdata is in here
        • "/mnt/user/appdata/" as the Appdata Share (Source) in Backup & Restore sees the folders on both Docker and Plex
        • All docker templates point to /mnt/user/appdata/<docker folder>, but I guess they could point to /mnt/docker/appdata (no idea)

      Downsides...

        • Occasionally, a "PlexMediaServer" folder appears on my Docker drive and I have to move it to the Plex drive (no problems with the service, and it happens very rarely)
        • The Fix Common Problems plugin issues a warning that appdata files/folders exist on the Plex pool. I've just ignored that, thinking there's currently no way around it (unless there is and I don't know)
  23. Thanks - two other disks of the same model had spun up, as had the parity. I'll try swapping them around and see what happens.