hernandito

Community Developer
Posts: 1480 · Days Won: 1

Everything posted by hernandito

  1. Thank you @JorgeB. I powered down the server; disconnected one of the new array drives, and upon reboot, array is stopped citing "configuration change". EDIT: I did the same with the other drive and got the same results.
  2. Thank you. I stopped the array to prevent anything from being written, so I can run the extended SMART test on the bad parity drive.
  3. Diagnostics attached.... Thank you. tower-diagnostics-20230910-1545.zip
  4. I ran xfs_repair on the two drives without the -n flag for about 24 hours, and at the end it simply said “exiting now”. I bought another 16TB drive yesterday that I’m pre-clearing. It will take time… after which I will reboot the server and create the new diagnostics. Thank you.
  5. Hi, I gave it a try to make the Docker folders behave like the screen above… did not have much luck. @scolcipitato, I think there needs to be some code to separate folders from dockers. I assume the challenge is how to keep the folders together when the docker lists are closed, and separate them when some or all folders are expanded (like the image above). I would keep trying, but my server is in bad shape: I had three drives fail within days, one of them a parity drive, and two of them well within warranty. I fear data loss of about 20TB… :( Shameless attempt to get more eyes on my issue (please don’t respond here, I don’t want to hijack this thread):
  6. Thank you Jorge... I repaired Disk 10 using xfs_repair on a separate computer (using PartedMagic). It ran without issues. I put it back in my unRAID box on a different SATA port/card and it seems to work OK. Ran xfs_repair in Maintenance mode a couple of times and it looks happy. I am currently running xfs_repair on both of the new array drives (6 and 15). It's been running for a couple of hours and this is what I see:
     Phase 1 - find and verify superblock...
     bad primary superblock - bad CRC in superblock !!!
     attempting to find secondary superblock...
     ....
     Those dots at the end scroll horizontally for a long time. The read count on the drives keeps going up, so I will leave it for a while. These two were the drives that unRAID rebuilt with one disabled parity drive. It ran too fast (like 6 hours) and I don't think it did a proper job. I still have one of the parity drives showing as disabled. I will wait for something to happen with the currently running repairs, then reboot and run a new diagnostic. Thanks again. H.
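For anyone following along, a typical xfs_repair workflow on an Unraid array disk looks roughly like the sketch below. The md device number is hypothetical (adjust to your disk slot; newer Unraid releases use a p1 suffix, e.g. /dev/md10p1), and it should be run with the array started in Maintenance mode so parity stays in sync:

```shell
# Dry run first: -n reports problems without writing anything
xfs_repair -n /dev/md10

# If the dry run looks sane, run the actual repair
xfs_repair /dev/md10

# Only if xfs_repair refuses to run because of a dirty log:
# -L zeroes the log, which can lose the most recent metadata
# transactions, so treat it as a last resort
xfs_repair -L /dev/md10
```

Running against the raw /dev/sdX device instead of the md device will invalidate parity, which is why the md device matters here.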
  7. Thank you Michael. Small update: I have changed all the cables and the situation is still bad. I am attaching diagnostics... Once again I am having to run xfs_repair on the same drive I did before. It keeps getting corrupted and becomes unmountable. When the array is mounted, it is offering to format the two replaced array drives. I don't want to do this until I find out if there is a way to rebuild something. Please help... not sure how I can get some of my data back. Thank you, H. tower-diagnostics-20230908-2208.zip
  8. Hi Guys, Pretty desperate. Long story short: I have had a lot of drives fail in the past week, three so far. I suspect bad cables; I have ordered new SATA data and power cables and am awaiting their arrival. My server has 14 array disks and two parity drives, plus one NVMe cache drive and one SSD cache-backup drive mounted via Unassigned Devices.
     At one point, one of the array drives showed as failed (a 16TB drive bought 8 months ago), and then another old 4TB showed as failed. I ordered new ones and replaced the 16TB and the 4TB (with an 8TB). While pre-clearing, I was moving some files around. One of my parity drives showed SMART errors and eventually failed; the error was the Disk Shift attribute, with a fairly high count of around 500,000. After pre-clearing, I assigned the new drives to the two array slots. After rebooting, I also had another drive showing data corruption. I was able to run xfs_repair and brought it back. Then, while still in Maintenance mode (in Safe Mode), there was a SYNC button for Data Rebuild (I can't recall the exact label). I hit that, and it started rebuilding. It went somewhat fast, like 6 hours to rebuild 16TB; I suspected this was not right. I let it finish, and when starting the array (in non-safe mode), indeed the two drives show as unmountable.
     I took the failed parity drive to another machine and ran a short test on it. It showed no errors, still the high Disk Shift count; nothing else seems abnormal. I put the drive back in the server, booted, and it still shows as disabled (missing). As of right now, I have one disabled parity and the two newly assigned drives. I have shut down the server while awaiting the new cables. If I start the array, the two new array drives show as assigned; if I un-assign them to no device and re-add them, the same thing happens when I start the array.
     The whole thing started when I upgraded the cache and cache backup to larger capacity and did the ZFS pool conversion for both. I then tried to re-use the old NVMe and SSD to create a future ZFS pool. Since my motherboard has only one NVMe slot, I bought an NVMe-to-PCIe adapter card and installed it in an available slot on the board. It all seemed to work fine... I worry that it was this card that messed things up, and I have since removed it. In essence, I don't know if it is the culprit for all the issues.
     Questions: Is there a way to bring the one parity drive back so I can rebuild those two drives? How can I make the two new array drives start a data rebuild? I know that you need two parities to rebuild two array drives. What can I expect if the one bad parity cannot be brought back? Any thoughts on what could be causing all these failures? Any ideas to resolve the potential underlying issue?
     For SATA controllers, my Supermicro motherboard has three mini-SAS connectors with SFF-8087-to-4x-SATA breakout cables each. I also have one of those LSI/Adaptec (can't recall) cards in a PCIe slot with two SFF-8087-to-4x-SATA cables; this card has been working for years. Waiting for Amazon to deliver, but not hopeful that this will be the right solution. Any advice is greatly appreciated.
     I am not sure how or what I need to send for one of those diagnostic results; if needed, please point me in the right direction. I recall ages ago having to install something on my Windows machine so that unRAID dumped the logs into it. Thank you!! H.
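On the diagnostics question: on current Unraid releases no Windows-side syslog tool is needed; an anonymized diagnostics archive can be generated from the web UI (Tools > Diagnostics) or directly from the server's terminal. A minimal sketch, assuming a recent 6.x release:

```shell
# Run from the Unraid terminal or an SSH session; collects system
# logs, configuration, and SMART reports for all drives into a zip
diagnostics

# The archive is written to the flash drive, ready to attach to a post
ls /boot/logs/
```

The resulting tower-diagnostics-YYYYMMDD-HHMM.zip is what the support forum asks to be attached.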
  9. Glad someone is using my examples... I was not sure if anyone knew of my customizations. I even managed to add some css browser width detection. If I am on my tablet, it gets more compressed than when on my desktop.
  10. A CSS for this would be nice... Times New Roman font is so 1995. Some little color accent would be nice too. Thank you for this Squid! H.
  11. If you like to tinker.... you will have a blast...! Enjoy.
  12. You pointed me in the right direction! I went to the Disk Shares section and clicked on the cache drive under the disk shares. This solves it... Thank you very much!
  13. Thank you Itimpi. In my Settings > Global Share Settings, it looks like everything is shared... At the moment, I can't stop the array as it is in the middle of copying the old appdata onto the new drive. Boy, that PlexMedia appdata folder is something serious 🙃 Any ideas? Thank you again, H.
  14. Hi, I am an unRAID veteran but a total newbie when it comes to ZFS. I upgraded my cache drive from a 1TB to a 2TB NVMe drive. I copied the entire contents to another drive. Then I stopped the array, powered off, and installed the new drive. When it restarted, I deleted the cache pool for the old drive, created a new cache pool, and formatted it as ZFS. Started the array and all seems good; currently copying the content back. Before, from my Windows PC, I could navigate to the root of the cache drive: \\TOWER\cache. But now cache is not a share. How do I get my cache drive back as a network share? Thank you! H.
  15. Better yet... this will not change the buttons at the bottom of the page. Starting at line 36 of the file, replace that section with this:
      button[class*='dropDown-'] {
          margin-top: 5px !important;
          margin-right: 3px !important;
          color: #4f4f4f !important;
          background: #e6e6e6 !important;
          border: 1px solid #999 !important;
      }
  16. Add the code below to the end of the "docker-custom.css" file:
      input[type="button"],
      input[type="reset"],
      input[type="submit"],
      button,
      button[type="button"],
      a.button {
          color: #4f4f4f !important;
          background: #e6e6e6 !important;
          border: 1px solid #999 !important;
      }
  17. Must be because you are using Dark Mode as your web UI... I will try to play with it and see if I can figure it out.
  18. I have updated my "custom css" files to minimize the "Uptime" column that was taking up valuable horizontal space. I also stylized and made the CPU/Memory Usage column narrower when using "Advanced View".
  19. Some time back I posted a collection of animated icons for folder icons: Animated Icon Collections
  20. Hi Guys, Not sure if you have seen that one can customize the UI of the Docker page. @scolcipitato has been great in editing some of his code so that someone w/ some creative and CSS skills can do this. I cannot help myself, but I really enjoy tweaking these things. I have created a repo w/ some customizations (more than the image above). https://github.com/hernandito/folder.view.custom If you have any ideas or suggestions, please let me know here. If you make your own variations and feel like sharing, please post them. Enjoy.
  21. Figured it out... If you accept the default directory of /mnt/cache/appdata/swing-music, this will not work. The template default creates the folder swing-music, but the terminal commands you followed created swingmusic. Take out the dash "-" in the default folder path and you are good.
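Equivalently, if the dashed folder already exists, it can simply be renamed to match. A minimal sketch of the idea — the /tmp base path here is a stand-in for illustration (on a real server the base would be /mnt/cache/appdata, with the container stopped first):

```shell
# Stand-in base directory so this is safe to try anywhere;
# on the actual server this would be /mnt/cache/appdata
BASE=/tmp/appdata-demo

# Folder as created by the template default (with the dash)
mkdir -p "$BASE/swing-music"

# Rename it to the name the terminal commands used (no dash)
mv "$BASE/swing-music" "$BASE/swingmusic"

# Confirm only the dash-free folder remains
ls "$BASE"
```

After renaming, point the container's appdata path at the dash-free folder so both sides agree.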
  22. Same issue here as well… but I’m not logged in on Dockerhub.