bobokun

Everything posted by bobokun

  1. Okay, good idea. I'll wait until the rebuild is finished, swap the SATA cable, and do another preclear.
  2. Is it safe to replace the cable while Unraid is on, if the drive is not mounted (unassigned)? I'm 27 hours into a parity sync/data rebuild and don't want to restart it and redo the whole rebuild just to replace a cable.
  3. The drive was plugged into a mini-SAS to 4-SATA breakout cable. Should I replace the whole mini-SAS to 4-port cable?
  4. I got two CRC errors on the new drive I was preclearing (/dev/sdd), and then the preclear failed on the post-read. I'm not sure what I should do... should I reseat the SATA cables and preclear the drive again? Here are the preclear log and new diagnostics: preclear_disk_JEHBR8AN_21792.txt unnas-diagnostics-20190924-2118.zip
  5. I added a new hard drive and was preclearing it (planning to replace my parity drive). I wanted to move all the contents of disk4 to disk5, since I was getting very slow read/write speeds on disk4 (based on the DiskSpeed docker). I was using the unBALANCE plugin to move everything from disk4 to disk5, and it was moving at about 1-3 MB/s. After waiting 30 minutes I got a bunch of notifications saying the array has errors ("Array has 6 disks with read errors") and that disk5 had an error ("Disk dsbl" alert). My Unraid GUI became unresponsive and I couldn't SSH in either, so I force-rebooted my server as that was my only option. After rebooting I saw that disk5 had an X on it, indicating an error. I tried to rebuild parity but my server froze again, and right before it froze I saw a bunch of messages saying "xfs metadata i/o error xfs_trans_read_buf_map unraid". I rebooted and now the X is back on disk5, with errors on my last parity check. I'm performing a read-check now. I'm scared to continue doing anything, including preclearing my new drive, and I'm not sure what is going on. Can you please check over my logs? Thank you. unnas-diagnostics-20190924-0347.zip
  6. I understand, and that's why I've added them to my excluded folders (as shown in the screenshot), but they're still being monitored. I must be misconfiguring something, but I'm not sure how to fix it.
  7. I've been trying to set up File Integrity, but I'm not sure I'm doing this correctly. I keep getting BLAKE2 hash key mismatches on my libvirt.img and a lot of my Nextcloud files, for example:
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/.htaccess was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/appdata_ocdoy1vwt49l/appstore/apps.json was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/appdata_ocdoy1vwt49l/appstore/categories.json was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/appdata_ocdoy1vwt49l/appstore/future-apps.json was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk5/vm backup/libvirt.img was modified
     Are these real errors? I have these files excluded and keep clearing the alerts, but every time the integrity check runs it flags the same files as mismatched. Here are my settings:
  8. I've been using unraid for about 2 years now and I love the fact that it makes it so easy to set up docker containers and have everything running! Happy Birthday Unraid!!
  9. Deleting those files/folders worked for me as well. Thanks!
  10. Has anyone else gotten this error when updating Nextcloud to the latest version (16.0.2)?
  11. How did you get it to run on Python 2.7 in the Docker container? As we saw, the container only contains Python 3; Python 2 wasn't installed. Did you install it manually inside the container to get it to work? Since we're both running the same container, I'd assume it should work for me too if you managed to get it working.
  12. I do have the script saved outside the container, and I also checked: the Python installed in the Docker container is 3.6.8, not 2.7. I have the same Docker container installed, but I think the reason it shows 2.7 is that I'm running the script from Unraid, not from within the container. Unraid itself has Python 2.7 installed, and the script connects to ruTorrent through the server_url variable. It could be a permission issue, but I'm not sure how to grant the script access. When I copied the script I pasted it directly into Unraid using the User Scripts plugin, so I can edit it directly. When running the script in a terminal window in Unraid (not in the Docker container) I see the full error message, whereas previously it was truncated:
      Traceback (most recent call last):
        File "./script", line 18, in <module>
          torrent_name = rtorrent.d.get_name(torrent)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1243, in __call__
          return self.__send(self.__name, args)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1602, in __request
          verbose=self.__verbose
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1283, in request
          return self.single_request(host, handler, request_body, verbose)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1316, in single_request
          return self.parse_response(response)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1493, in parse_response
          return u.close()
        File "/usr/lib64/python2.7/xmlrpclib.py", line 800, in close
          raise Fault(**self._stack[0])
      xmlrpclib.Fault: <Fault -506: "Method 'd.get_name' not defined">
  13. Oh, should I be saving this script inside the Docker container? I have the script saved outside and was using the User Scripts plugin to run it. Maybe that's why I'm experiencing the issue? If you saved the script inside the container, how did you schedule a cron job inside the container to run it?
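The Fault -506 above suggests a command-name mismatch rather than a Python-version problem: newer rtorrent builds renamed many of their XML-RPC getters (e.g. d.get_name became d.name), so the old names report "not defined". As a sketch (the FakeRtorrent stub and return value are illustrative, not from the posts), a script can try the old name and fall back to the renamed one, written here in Python 3 syntax:

```python
import xmlrpc.client  # named `xmlrpclib` on Python 2

# Assumed rename map for a couple of the getters used in the posts;
# newer rtorrent builds appear to use the right-hand names.
OLD_TO_NEW = {
    "d.get_name": "d.name",
    "d.get_message": "d.message",
}

def call_with_fallback(proxy, method, *args):
    """Call `method` on the XML-RPC proxy, retrying with the renamed
    command if the server reports it as not defined (Fault -506)."""
    try:
        return getattr(proxy, method)(*args)
    except xmlrpc.client.Fault as fault:
        renamed = OLD_TO_NEW.get(method)
        if fault.faultCode != -506 or renamed is None:
            raise
        return getattr(proxy, renamed)(*args)

class FakeRtorrent:
    """Demo stand-in for xmlrpc.client.ServerProxy: it only implements
    the new-style command name `d.name` (hypothetical response)."""
    def __getattr__(self, name):
        if name == "d.name":
            return lambda info_hash: "example.iso"
        def missing(*_args):
            raise xmlrpc.client.Fault(-506, "Method '%s' not defined" % name)
        return missing
```

With a real ServerProxy in place of FakeRtorrent, the same helper lets one script run against either an old or a new rtorrent build.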
  14. I realized what I did wrong: I forgot to rescan the controllers after I swapped the SATA cable over to the motherboard instead of the SAS2008 controller, which is why I was getting those errors when benchmarking.
  15. I ended up connecting more drives to my motherboard and fewer to the Dell H310 card to see if that improves the results. Is this normal behavior for drives, or do you still think there are issues with my disk4/parity?
  16. Are these all Windows tools? Is there a way to run them using Docker containers or plugins, or should I run them in a Windows VM? I've been using disk4 since before I installed my Dell H310, and even with the H310 installed it was running fast and parity checks were fine. It's when I added two additional new drives (2 x 10TB) that the issues started. I'm wondering if there is an issue with my H310 controller, since it is running 6 drives (two SAS-to-SATA cables: four SATA connections on one and two on the other).
  17. Disk4 isn't a shucked drive. I tried replacing the SATA cables for disk4 and parity and connecting them directly to the motherboard instead of the Dell H310 controller, but now when I run DiskSpeed it hangs on disk5 with the message below. Now I'm wondering if there is an issue with my H310 card or with my SAS-to-SATA cables. Not sure if this helps?
      SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]: Scanning Disk 5 (sdd) at 9 TB (90%) - 0|9999999999|0 (64)
  18. How do I fix the xmlrpclib issue? Is this something that needs to be installed outside the Docker container? I have the NerdTools plugin installed along with python-2.7.15-x86_64-3.txz. Is there anything else I need to install?
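One point worth noting here: xmlrpclib ships with the Python standard library, so nothing extra needs installing; Python 2 calls it xmlrpclib and Python 3 renamed it xmlrpc.client. A minimal compatibility import, sketched with a placeholder URL (not the poster's actual endpoint):

```python
# xmlrpclib is part of the standard library on both Python lines;
# only the module name changed between Python 2 and Python 3.
try:
    import xmlrpclib  # Python 2 name
except ImportError:
    import xmlrpc.client as xmlrpclib  # Python 3 rename

# Constructing a proxy does not open a connection yet; the URL below
# is a placeholder for wherever rtorrent's XML-RPC endpoint lives.
rtorrent = xmlrpclib.ServerProxy("http://127.0.0.1:8000/RPC2")
```

With this shim at the top, the same script runs unchanged under the Python 2.7 from NerdTools or the Python 3 inside the container.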
  19. Here are new diags taken while performing a parity check; I grabbed them as it slowed down to 2.7 MB/s. unnas-diagnostics-20190610-1813.zip
  20. Yes I tried running it multiple times and the outcome is the same
  21. Hi, I recently had a parity check that took a lot longer than usual. I ran a disk benchmark and found that the parity drive and disk4 are really slow, but their SMART reports seem fine. I'm not sure what might be causing the slow parity check; it took almost double its normal time. Please see the attached diagnostics. unnas-diagnostics-20190530-1805.zip
  22. Please see attached HGST_HDN728080ALE604_R6GS94LY-20190530-1805 disk4 (sde).txt WDC_WD100EMAZ-00WJTA0_JEGW7S5N-20190530-1805 parity (sdi).txt
  23. I recently ran a benchmark after seeing my parity check take a lot longer than normal. All my drives have been precleared and are less than a year old. My parity drive especially is less than 2 months old, and it's shucked, so I can't simply RMA it. Is there something I can do? I'm not sure why the speeds suddenly drop so much.
  24. This is the error I'm getting; I think the key line is "torrent_tracker_status = rtorrent.d.get_message(torrent)":
      Traceback (most recent call last):
        File "/tmp/user.scripts/tmpScripts/rtorrent Cleanup/script", line 18, in <module>
          torrent_tracker_status = rtorrent.d.get_message(torrent)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1243, in __call__
          return self.__send(self.__name, args)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1602, in __request
          verbose=self.__verbose
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1283, in request
          return self.single_request(host, handler, request_body, verbose)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1316, in single_request
          return self.parse_response(response)
        File "/usr/lib64/python2.7/xmlrpclib.py", line 1493, in parse_response
          return u.close()
        File "/usr/lib64/python2.7/xmlrpclib.py", line 800, in close
          raise Fault(**self._stack[0])
      xmlrpclib.Fault:
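The fault details are cut off at the end of that traceback, so it is hard to tell whether this is the same -506 "method not defined" fault seen with d.get_name earlier. Catching the Fault and printing its code and string makes the cause visible; a sketch (function name and stub are hypothetical, written in Python 3 syntax):

```python
import xmlrpc.client  # named `xmlrpclib` on Python 2

def tracker_status(proxy, info_hash):
    """Fetch the tracker message for a torrent, reporting the XML-RPC
    fault code and message instead of dying with a bare traceback."""
    try:
        return proxy.d.get_message(info_hash)
    except xmlrpc.client.Fault as fault:
        # A faultCode of -506 would mean this rtorrent build does not
        # know the old command name (newer builds call it d.message).
        print("XML-RPC fault %d: %s" % (fault.faultCode, fault.faultString))
        return None
```

Running this once against the live proxy should show immediately whether d.get_message needs to be replaced with d.message on that container's rtorrent build.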