About bobokun


  1. I understand, and that's why I've added them to my excluded folders (as shown in the screenshot), but they're still being monitored? I must be configuring something wrong, but I'm not sure how to fix it.
  2. I've been trying to set up File Integrity and get it working, but I'm not sure whether I'm doing it correctly. I keep getting BLAKE2 key mismatches on my libvirt.img and on a lot of my Nextcloud files. For example:
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/.htaccess was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/appdata_ocdoy1vwt49l/appstore/apps.json was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/appdata_ocdoy1vwt49l/appstore/categories.json was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk1/nextcloud/appdata_ocdoy1vwt49l/appstore/future-apps.json was modified
     BLAKE2 hash key mismatch (updated), /mnt/disk5/vm backup/libvirt.img was modified
     Are these true errors? I have these files excluded and keep clearing the warnings, but every time the integrity check runs it finds the same files mismatched. Here are my settings:
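     For what it's worth, "mismatch (updated)" warnings on files that applications actively rewrite (Nextcloud's appstore cache, a running VM's libvirt.img) are expected: any change to a file's contents produces a different BLAKE2 digest. A minimal sketch of the kind of check the plugin performs, using Python's standard hashlib (the temp file here is just an illustration):

     ```python
     import hashlib
     import os
     import tempfile

     def blake2_digest(path, chunk_size=1 << 20):
         """Stream a file through BLAKE2b and return its hex digest."""
         h = hashlib.blake2b()
         with open(path, "rb") as f:
             for chunk in iter(lambda: f.read(chunk_size), b""):
                 h.update(chunk)
         return h.hexdigest()

     # Any rewrite of the file -- even a legitimate one by the
     # application that owns it -- yields a different digest, which the
     # checker then reports as a mismatch against its stored hash.
     with tempfile.NamedTemporaryFile(delete=False, suffix=".json") as f:
         f.write(b'{"apps": []}')
         path = f.name

     before = blake2_digest(path)
     with open(path, "wb") as f:   # the application rewrites its cache file
         f.write(b'{"apps": ["files"]}')
     after = blake2_digest(path)

     print(before != after)  # True: a rewritten file always hashes differently
     os.remove(path)
     ```

     So if the exclusions aren't taking effect, those files will keep showing up on every scan even though nothing is actually wrong with them.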
  3. I've been using Unraid for about 2 years now and I love the fact that it makes it so easy to set up Docker containers and have everything running! Happy Birthday Unraid!!
  4. Deleting those files/folders worked for me as well. Thanks!
  5. Anyone else getting this error when updating Nextcloud to the latest version (16.0.2)?
  6. How did you get it to run on Python 2.7 in the Docker container? As we saw, the container only includes Python 3; Python 2 isn't installed. Did you install it manually inside the container to get it to work? Since we're both running the same container, I'd assume it would work for me too if you managed to get it working.
  7. I do have the script saved outside the container, and I also checked: the Python installed in the Docker container is 3.6.8, not 2.7. I have the same Docker container installed, but I think the reason it shows 2.7 is that I'm running the script from Unraid itself, not from within the container. On Unraid I have Python 2.7 installed, and the script connects to ruTorrent using its server_url variable. It could be a permission issue, but I'm not sure how to grant the script access. I pasted the script directly into Unraid using the User Scripts plugin, so I can edit it there. When running the script in a terminal window in Unraid (not in the Docker container) I see the full error message, whereas previously it was truncated:
     Traceback (most recent call last):
       File "./script", line 18, in <module>
         torrent_name = rtorrent.d.get_name(torrent)
       File "/usr/lib64/python2.7/xmlrpclib.py", line 1243, in __call__
         return self.__send(self.__name, args)
       File "/usr/lib64/python2.7/xmlrpclib.py", line 1602, in __request
         verbose=self.__verbose
       File "/usr/lib64/python2.7/xmlrpclib.py", line 1283, in request
         return self.single_request(host, handler, request_body, verbose)
       File "/usr/lib64/python2.7/xmlrpclib.py", line 1316, in single_request
         return self.parse_response(response)
       File "/usr/lib64/python2.7/xmlrpclib.py", line 1493, in parse_response
         return u.close()
       File "/usr/lib64/python2.7/xmlrpclib.py", line 800, in close
         raise Fault(**self._stack[0])
     xmlrpclib.Fault: <Fault -506: "Method 'd.get_name' not defined">
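     Note that Fault -506 comes from the server side: it means rTorrent itself doesn't define the requested XML-RPC method. Newer rTorrent builds dropped the deprecated d.get_* aliases in favour of the short names (d.name instead of d.get_name), so the Python version on the client isn't necessarily the problem. A small compatibility wrapper, sketched here on the assumption that this is the cause (torrent_name and the commented-out usage are illustrative, not from the original script):

     ```python
     try:
         import xmlrpclib                    # Python 2, as on the Unraid host
     except ImportError:
         import xmlrpc.client as xmlrpclib   # Python 3 name for the same module

     def torrent_name(server, torrent_hash):
         """Try the modern method name first, then the deprecated alias."""
         for method in ("d.name", "d.get_name"):
             try:
                 return getattr(server, method)(torrent_hash)
             except xmlrpclib.Fault as fault:
                 if fault.faultCode != -506:  # some other XML-RPC error
                     raise
         raise RuntimeError("neither d.name nor d.get_name is defined")

     # Usage, with whatever server_url the script already has:
     # rtorrent = xmlrpclib.ServerProxy(server_url)
     # print(torrent_name(rtorrent, torrent_hash))
     ```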
  8. Oh, should I be saving this script inside the Docker container? I have the script saved outside it and was using the User Scripts plugin to run it. Maybe that's why I'm experiencing the issue? If you saved the script inside the container, how did you schedule a cron job inside the container to run it?
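     One common pattern that avoids scheduling anything inside the container is to keep the cron entry on the host and use docker exec, so the container's own interpreter runs the script. A sketch of such a crontab line (the container name "rutorrent" and the script path are examples, not taken from this thread):

     ```shell
     # Host crontab fragment: run the script inside the container every
     # hour, so the container's Python (3.6.8 here) interprets it.
     # m  h  dom mon dow  command
       0  *  *   *   *    docker exec rutorrent python3 /config/scripts/script.py
     ```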
  9. I realized what I did wrong: I forgot to rescan the controllers after I swapped the SATA cable to connect to my motherboard rather than the SAS2008 controller, which is why I was getting those errors when benchmarking.
  10. I ended up connecting more drives to my motherboard and fewer to the Dell H310 card to see if that improves the results. Is this normal behavior for drives, or do you still think there are issues with my disk4/parity?
  11. Are these all Windows tools? Is there a way to run them using Docker containers or plugins, or should I run them in a Windows VM? I've been using disk4 since before I installed my Dell H310, and even with the H310 installed it ran fast and parity checks were fine. The issues only started when I added two additional new drives (2 x 10TB). I'm wondering if there's an issue with my H310 controller, since it's running six drives (two SAS-to-SATA cables: four SATA connections on one and two on the other).
  12. Disk4 isn't a shucked drive. I tried replacing the SATA cables for disk4 and parity and connecting them directly to the motherboard instead of the Dell H310 controller, but now when I run DiskSpeed it hangs on disk5 with the message below. Now I'm wondering whether the issue is my H310 card or my SAS-to-SATA cables.
      SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]: Scanning Disk 5 (sdd) at 9 TB (90%) - 0|9999999999|0 (64)
      Not sure if this helps?
  13. How do I fix the xmlrpclib issue? Is this something that needs to be installed outside the Docker container? I have the NerdTools plugin with python-2.7.15-x86_64-3.txz installed. Is there anything else I need to install?
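      For reference, xmlrpclib is part of Python 2's standard library, so the NerdTools python-2.7.15 package already includes it; there's nothing extra to install. On Python 3 the same module was renamed to xmlrpc.client. A quick way to confirm it's available under either name:

      ```python
      # xmlrpclib ships with Python 2's standard library; on Python 3 it
      # was renamed to xmlrpc.client. This import works on both.
      try:
          import xmlrpclib                    # Python 2
      except ImportError:
          import xmlrpc.client as xmlrpclib   # Python 3

      # Either way the same API is present, including the classes seen
      # in the traceback above:
      print(hasattr(xmlrpclib, "ServerProxy"))  # True
      print(hasattr(xmlrpclib, "Fault"))        # True
      ```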
  14. Here are new diags taken while performing a parity check; I grabbed them as it slowed down to 2.7 MB/s. unnas-diagnostics-20190610-1813.zip