tomjrob

Members
  • Content Count

    41
  • Joined

  • Last visited

Community Reputation

0 Neutral

About tomjrob

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. Upgraded to 6.7 from 6.6.7 this morning. The upgrade was very smooth. All Dockers and VMs appear to be running without any issues. Really like the new dashboard. Thanks to everyone on the team. Great work!
  2. (Solved) Thanks for clearing up the confusion
  3. Update: The parity rebuild process completed on time with no errors. Stopped all services and rebooted. Everything is up and running, and no errors are showing in the log yet. I am monitoring and will post an update after some time has passed. I still have no idea how errors were happening on a device that was not in the array, so any insight is appreciated.
  4. Any insight into the problem below is much appreciated. Began the process of rebuilding parity after removing a data drive earlier today (removed drive 1 after moving all data off of it). Did a "New Config" and started the array to do the parity rebuild. Almost immediately, began seeing the following errors in the syslog:

     Apr 3 08:12:44 TOWER ntfs-3g[30147]: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
     Apr 3 08:12:44 TOWER ntfs-3g[30147]: Failed to read index block: Input/output error
     Apr 3 08:12:44 TOWER kernel: Buffer I/O error on dev sdl1, logical block 61047301, async page read

     I am baffled because there is no device sdl1 in the array, and as far as I can tell that device wasn't there even before the New Config. I copied all device assignments down prior to doing the New Config, and there was no sdl1. As of now there is still no device sdl1, so how can there be a failure on a device that does not exist? (See the sketch after this list for one way to identify what sdl actually maps to.) In addition, the parity rebuild is continuing despite the errors, appears to be running fine, and is scheduled to complete in about 50 minutes. That would put the entire rebuild time at around 10 hours for a 3 x 4TB and 1 x 2TB array, well above the 100MB/sec mark. I am attaching diagnostics which show the errors and the sequence of events. Lastly, I am running 6.6.7. Thanks in advance for any help. tower-diagnostics-20190403-1635.zip
  5. (Solved) After deleting the docker image and recreating it at 20G, it appears that all of the errors have been resolved and all containers are working properly. Did BTRFS scrubs (both cache and Dockers), and no corruption was identified. The recommended procedure to recreate the containers worked perfectly, and all data within the containers is intact. Not sure what I was doing wrong in the past, but a big thank you for all of the support. Not sure if I need to do anything additional to close this, but if so, let me know and I will do it. Want to follow the forum rules.
  6. Thanks for the suggestions. I have never had the docker image fill up; I just made it that large because I had the space. As for the plugins, I usually keep them all up to date. Just checked now, and the only plugin flagged as having an update available is Dynamix System Stats. That one cannot be updated at this time because I am not running version 6.7 yet, which is a prerequisite. I will take your suggestion and delete the image and recreate it at 20G. One point of confusion for me is that in the past I have had issues recreating containers. Sometimes the container will be recreated, but all of the data is gone, as if I had just downloaded it for the first time. That is a real problem in this situation, as there are many settings and much customization in these containers, and it would take a long time to set them up again from scratch. I am wondering if I am doing something wrong in the procedure. One note is that I do not use the default appdata folder for my dockers. I have a folder on the cache pool called "Dockers", with subfolders for each container. Is this a supported method? If so, can you provide guidance on the procedure to get the containers back? Seems simple enough. My procedure is:
     1. Stop the docker service.
     2. Delete the image.
     3. Select the new size and restart the docker service.
     4. Go to Apps / Previous Apps.
     5. Select the "Reinstall" button for each container.
     Shouldn't this re-install the containers with the data and customizations intact? (There is also a sketch after this list for checking the path mappings a container actually uses.) I will wait to take your suggestion until you can verify the procedure. Thank you very much for the help. Long time UNRAID user and love the OS and the great support group here.
  7. Thanks for the help. Attached is the Diagnostics zip file. tower-diagnostics-20190325-1421.zip
  8. Not sure if this is the correct place to post this, but I am looking for some help. I have been using 4 binhex dockers for a while now with great success: SABnzbd, Radarr, Sonarr, and Emby. First up, a great big thanks for the great work on all of these. Upgraded to Unraid 6.6.7 a couple of weeks ago, and all was well until today. Got notifications today that there were upgrades available for SABnzbd and Radarr. Performed the suggested upgrades, and now none of the binhex dockers work; all 4 fail to start. I saw some btrfs csum errors on my cache pool (mirrored 256GB SSDs), so did a scrub (see the scrub notes after this list). All errors were corrected and now the scrub runs clean. However, the binhex dockers still do not start. All other dockers (MakeMKV, Handbrake, Diskspeed, CrashPlan Pro) still work fine. I have also rebooted the server 2x, with no resolution. I am attaching the system log file (tower-syslog-20190325-0907.zip), which still shows some csum errors when trying to start these dockers. Any help is greatly appreciated. Please let me know if I need to provide any additional info.
  9. trurl, thanks for the response. Not sure what you are suggesting. Should I stop the rebuild, put Disk 5 back into the array (out of the esata enclosure), experience the red X, and then do what? Diagnostics hangs, so should I try running the diags from the array console?
  10. Long time happy UNRAID user. This problem has me stumped, and I cannot tell if it is a hardware or UNRAID software problem. Yesterday, I found that Disk 5 in the array had a red X. This has happened occasionally before, and every time it does, the drive experiencing the error shows no hardware problems. I have always been able to wipe the partition on another system with partition magic, do extensive testing including a surface test, and find no errors. I can then introduce it back into the array the next time the red X occurs on Disk 5, the parity rebuild completes without error, and the array runs fine until the next time it happens. Have been swapping (2) 2TB drives like this for over a year, and the problem only happens very occasionally. It is ALWAYS Disk 5.

      Yesterday, when the problem happened, I tried swapping Disk 5 again, but this time the new drive did not work: immediate red X on the new drive. Here is what I have done to isolate it. Tried a third drive in the Disk 5 slot; same red X. Replaced the cable from the controller, which is an LSI 9211-8i; same red X. I am using only 2 ports of the controller, so I moved the cable to the other half of the controller; no change, same red X. Swapped the power plug, in case of a power problem; no fix. Tried introducing a spare 4TB drive into the Disk 5 slot, and the array recognized it, but when I went to start rebuilding parity, it immediately failed with a red X. At this point, UNRAID would not allow me to put a 2TB drive into the slot, saying it was too small, so everything subsequent was done with the 4TB drive.

      The next step was to use an external esata enclosure to house Disk 5 instead of attaching it to the LSI controller. Put the 4TB drive into the esata enclosure, and everything worked! Did a complete parity rebuild (11+ hours) and the array returned to normal. However, I do not want the array drive housed in the esata enclosure; I use that for unassigned drives.

      This is where it gets really weird. This morning I put another 2TB drive into the array, attached it to the same LSI port that was getting the errors, and it worked great as an unassigned drive. There is a lot of data on the drive and I could read it all just fine. Based on that, I assumed the port, cable, power, etc. were good, so I took the next step: shut down the array normally, removed the 2TB drive, and moved the 4TB (array Disk 5) drive back from the esata enclosure to the spot where the unassigned drive was working. As soon as I booted the array, Disk 5 got a red X and errors! Back to square one! I moved it back to the esata enclosure, did the procedure to have UNRAID "forget" the serial number, and reintroduced the same drive as Disk 5 again, and it is currently doing the parity rebuild without error.

      Bottom line: it seems the array cannot use any LSI port for Disk 5, even though Disk 4 of the array is attached to the same controller and working fine, AND any drive seems to work fine attached to the same port as long as it is an unassigned drive and not part of the array. I have uploaded the hardware profile of my setup to Limetech. Finally, I tried to download diagnostics while it was failing with the red X, but it just hung. I did a screen capture of the syslog at failure time, and I am attaching it. (See the SMART/HBA checks sketched after this list.) I am at a loss for next steps, so any help from the forum is appreciated. I am afraid to try moving Disk 5 out of the esata enclosure now, because every unsuccessful attempt means an 11-hour rebuild before trying again.
      Thanks in advance. Tower Syslog Snapshot.docx
  11. I have been trying to use this docker tool to create snapshots of my VMs. It appears to work great, and I have verified that internal snapshots are created. However, after rebooting, none of the snapshots appear in the virt-manager GUI anymore, even though the snapshots are still present in the QCOW2 file of the guest. I have verified this by listing the snapshots in the image from the command line, and they are all still there; they just do not show up in the tool anymore. (See the snapshot-listing commands sketched after this list.) So my question is: how do I get the virt-manager docker to keep track of the snapshots it created after a reboot?
  12. Parity check speed was 107.9MB/sec after disconnecting the DVD drive. The Mobilestor external esata device did not seem to affect the speed, as I turned it off and on during the check and the speed did not change significantly. I will isolate or remove the DVD drive going forward. Many thanks to BobPhoenix for the assistance.
  13. Bob, Thanks for the quick reply. I do have a dvd drive attached to the lsi card. I believe the "port multiplier" you are referring to is an external drive bay (Mobilestor unit) which is attached via esata to the unraid server. I only use it for "unassigned devices" which are not part of the parity protection devices, so didn't think that would cause issues. However, if it is causing resets, I will remove it. So the plan is to disconnect the DVD drive, and turn the Mobilestor unit off and run a parity check and compare performance. Does that make sense?
  14. Additional Info: Screen shot of parity rebuild showing elapsed time
  15. I am upgrading my parity drive from 2TB to 4TB. The problem is that the throughput seems very low (35MB/sec to 44MB/sec). All drives in the array are 7200rpm and a mix of SATA 2 and SATA 3. When doing the preclear, the new 4TB drive was considerably faster (106MB/sec - 129MB/sec), so I am wondering if someone could shed some light on this. The drive is attached to the same SATA port for both the preclear and the parity rebuild (internal motherboard port on an ASRock 970 Extreme 4). (See the per-drive read-speed check sketched below.) Diagnostics and the preclear report are attached, and I've uploaded the hardware profile to Limetech as well. Thanks in advance for any insight. tower-diagnostics-20180519-1110.zip preclear_report_WOL240382382_2018.05.17_19.19.26.txt
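
Notes on item 4: the sdX names are assigned by the kernel to every attached disk, so a drive that is not part of the array (for example one sitting in an external enclosure with an old NTFS partition on it) can still produce I/O errors in the syslog. A minimal sketch, assuming console or SSH access to the server, for mapping sdl back to a physical drive; nothing here is specific to the attached diagnostics:

    # Which physical drive (model/serial) currently maps to sdl
    ls -l /dev/disk/by-id/ | grep -w sdl

    # Is the sdl1 partition mounted anywhere (e.g. by Unassigned Devices)?
    grep sdl /proc/mounts

    # Recent kernel messages mentioning the device
    dmesg | grep -i sdl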
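
Notes on items 5 and 8: the csum errors were reported against the btrfs cache pool, and a scrub plus the per-device error counters are the usual way to confirm the pool is clean again. A minimal sketch, assuming the default Unraid mount points (/mnt/cache for the pool, /var/lib/docker for the loop-mounted docker image):

    # Scrub the cache pool and check progress / results
    btrfs scrub start /mnt/cache
    btrfs scrub status /mnt/cache

    # The docker image is a separate btrfs filesystem; scrub it too
    btrfs scrub start /var/lib/docker
    btrfs scrub status /var/lib/docker

    # Cumulative per-device error counters (write/read/flush/corruption/generation)
    btrfs device stats /mnt/cache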
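
Notes on item 6: whether a reinstalled container comes back with its settings largely depends on the host path mapped to the container's /config still pointing at the same folder, so a custom share such as "Dockers" should work as long as the template keeps that mapping when the container is recreated from Previous Apps. A minimal sketch for checking what a container actually maps, using binhex-sonarr purely as an example name:

    # Host path -> container path mappings for one container
    docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' binhex-sonarr

    # The settings survive a docker.img rebuild only if they live on the host side
    ls /mnt/cache/Dockers/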
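
Notes on item 10: when a slot throws a red X no matter which physical drive is in it, the drive's SMART report and the HBA messages in the kernel log usually narrow the fault down to drive, cable/backplane, or controller. A minimal sketch, assuming the drive currently appears as /dev/sdX (substitute the real letter from the Main page or the diagnostics):

    # Full SMART report for a SATA drive behind the LSI 9211-8i (IT mode passes SMART through)
    smartctl -a /dev/sdX

    # Kernel messages from the LSI SAS2008 driver (resets, task aborts, link errors)
    dmesg | grep -iE 'mpt[23]sas'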
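
Notes on item 11: internal snapshots are stored inside the qcow2 file itself, while the list virt-manager shows comes from libvirt's snapshot metadata, and the two can get out of sync after a reboot. A minimal sketch for comparing the two views; MyVM and the vdisk path are placeholders:

    # Snapshots recorded inside the qcow2 image itself
    qemu-img snapshot -l /path/to/vdisk1.qcow2

    # Snapshots that libvirt (and therefore virt-manager) currently knows about
    virsh snapshot-list MyVM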
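
Notes on item 15: a parity sync reads every data drive in parallel while writing parity, so it is typically limited by the slowest drive (or a shared bus), not by the new parity drive that preclear tested in isolation. A minimal sketch for comparing raw sequential read speed per drive, run while the array is otherwise idle, with sdX replaced by each device in turn:

    # Buffered sequential read test for one drive
    hdparm -t /dev/sdX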