KoNeko

Everything posted by KoNeko

  1. I have syslog enabled. I re-enabled the disk, and it has done its parity check and rebuild, and everything is fine again now. I added the diagnostics. I did get a message that the disk got disabled, but only after the restart. thanekos-diagnostics-20200625-0450.zip
  2. I have a disk that was pushed off the array, and the write count said 18,446,744,073,709,545,472. I didn't get any error or notice about it; I saw it by chance when I logged in. I did a restart of the server and then I got a notice:

        Unraid Disk 5 error
        Alert [THANEKOS] - Disk 5 in error state (disk dsbl)
        1593026701
        WDC_WD30EFRX-68AX9N0_WD-WMC1T0944207 (sdg)
        alert

        Jun 24 21:23:37 ThaNekos kernel: sd 7:0:1:0: [sdg] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
        Jun 24 21:23:37 ThaNekos kernel: sd 7:0:1:0: [sdg] 4096-byte physical blocks
        Jun 24 21:23:37 ThaNekos kernel: sd 7:0:1:0: [sdg] Write Protect is off
        Jun 24 21:23:37 ThaNekos kernel: sd 7:0:1:0: [sdg] Mode Sense: 9b 00 10 08
        Jun 24 21:23:37 ThaNekos kernel: sd 7:0:1:0: [sdg] Write cache: enabled, read cache: enabled, supports DPO and FUA
        Jun 24 21:23:37 ThaNekos kernel: sdg: sdg1
        Jun 24 21:23:37 ThaNekos kernel: sd 7:0:1:0: [sdg] Attached SCSI disk
        Jun 24 21:24:24 ThaNekos emhttpd: WDC_WD30EFRX-68AX9N0_WD-WMC1T0944207 (sdg) 512 5860533168
        Jun 24 21:24:24 ThaNekos kernel: mdcmd (6): import 5 sdg 64 2930266532 0 WDC_WD30EFRX-68AX9N0_WD-WMC1T0944207
        Jun 24 21:24:24 ThaNekos kernel: md: import disk5: (sdg) WDC_WD30EFRX-68AX9N0_WD-WMC1T0944207 size: 2930266532

     I can still check the SMART values of the disk in Unraid, and those are all good. How is it possible it did so many writes? (See the quick check on the number below.) I had it do a health check on Monday and it was all OK. It says the drive is disabled, but there are no further warnings in the logs.
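     Side note on the number itself: it is exactly 6,144 below 2^64, so it looks more like a small negative value stored in an unsigned 64-bit counter than like real writes. A quick way to verify the arithmetic from any machine that has python3:

        python3 -c 'print(2**64 - 18446744073709545472)'   # prints 6144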
  3. I had a vmdk from my QNAP. What I did to get it working, thanks to lots of googling: open a console on Unraid and type

        qemu-img convert -p -f vmdk -O raw /mnt/user/<the location of your vmdk file> /mnt/user/<the location of your new file>.img

     then make a VM, link it to that img file, and start it. Worked for me. A concrete example with sample paths is below.
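     For example, assuming the vmdk sits in an "isos" share and the new image should end up in a "domains" share (both paths are only placeholders, substitute your own), it would look something like this:

        qemu-img convert -p -f vmdk -O raw /mnt/user/isos/myvm.vmdk /mnt/user/domains/myvm/vdisk1.img

     The -p flag just shows conversion progress, and -f/-O name the input and output formats.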
  4. The problem was that Medusa could not connect to my ruTorrent. I was using Medusa on a br01 custom IP and it worked; I changed it to host networking so I could access it via the WireGuard VPN, but that also seems to break the connection to ruTorrent. I changed it back to a static IP on br01 and it works again. It still says "SNATCHQUEUE-SNATCH-371040 :: [] rTorrent: Unable to send Torrent", but it does seem to send it anyway.
  5. There was an update to the Docker container, and now on every download I get:

        2020-06-19 22:38:03 ERROR SNATCHQUEUE-SNATCH-371040 :: [] Snatch failed! For result: [HorribleSubs].Nami.yo.Kiitekure.-.12.[720p].mkv
        Traceback (most recent call last):
          File "/app/medusa/medusa/search/queue.py", line 459, in run
            self.success = snatch_episode(result)
          File "/app/medusa/medusa/search/core.py", line 167, in snatch_episode
            result_downloaded = client.send_torrent(result)
          File "/app/medusa/medusa/clients/torrent/generic.py", line 238, in send_torrent
            if not self._get_auth():
          File "/app/medusa/medusa/clients/torrent/rtorrent.py", line 55, in _get_auth
            self.auth = RTorrent(self.host, None, None, True)
          File "/app/medusa/lib/rtorrent/__init__.py", line 87, in __init__
            self._verify_conn()
          File "/app/medusa/lib/rtorrent/__init__.py", line 126, in _verify_conn
            assert 'system.client_version' in self._get_rpc_methods(
          File "/app/medusa/lib/rtorrent/__init__.py", line 164, in _get_rpc_methods
            return(self._rpc_methods or self._update_rpc_methods())
          File "/app/medusa/lib/rtorrent/__init__.py", line 154, in _update_rpc_methods
            self._rpc_methods = self._get_conn().system.listMethods()
          File "/usr/lib/python3.8/xmlrpc/client.py", line 1109, in __call__
            return self.__send(self.__name, args)
          File "/usr/lib/python3.8/xmlrpc/client.py", line 1450, in __request
            response = self.__transport.request(
          File "/usr/lib/python3.8/xmlrpc/client.py", line 1153, in request
            return self.single_request(host, handler, request_body, verbose)
          File "/usr/lib/python3.8/xmlrpc/client.py", line 1165, in single_request
            http_conn = self.send_request(host, handler, request_body, verbose)
          File "/usr/lib/python3.8/xmlrpc/client.py", line 1278, in send_request
            self.send_content(connection, request_body)
          File "/usr/lib/python3.8/xmlrpc/client.py", line 1308, in send_content
            connection.endheaders(request_body)
          File "/usr/lib/python3.8/http/client.py", line 1235, in endheaders
            self._send_output(message_body, encode_chunked=encode_chunked)
          File "/usr/lib/python3.8/http/client.py", line 1006, in _send_output
            self.send(msg)
          File "/usr/lib/python3.8/http/client.py", line 946, in send
            self.connect()
          File "/usr/lib/python3.8/http/client.py", line 1402, in connect
            super().connect()
          File "/usr/lib/python3.8/http/client.py", line 917, in connect
            self.sock = self._create_connection(
          File "/usr/lib/python3.8/socket.py", line 808, in create_connection
            raise err
          File "/usr/lib/python3.8/socket.py", line 796, in create_connection
            sock.connect(sa)
        OSError: [Errno 113] Host is unreachable
  6. Yes, I know. I tried it with it enabled to get it working, but I could not access the internet or the network. So it's disabled for now, until I can figure something out.
  7. I always use IP addresses to access anything on my network, not DNS names. So the router is 192.168.X.X, the other NAS also has an IP, and the Unraid server has an IP. Everything on my network is also reachable via the hostnames I made, like router.home and unraid.home etc. But everything running on Unraid/Docker with custom IPs on the containers (I think VMs too, didn't try that yet) I can't access via WireGuard. When I start OpenVPN I can access everything on my network, even the custom IPs of my Unraid Docker containers.

     -----

     I'm using Remote tunneled access. Did some reading and saw I had to set "Host access to custom networks" to Enabled; I did that. I had to add a static route on my router (pfSense), per the remark "Docker containers on custom networks need static routing 10.253.0.0/24 to 192.168.1.60". I did that (it's currently disabled because it isn't working); see the route sketch below. I also had to add the gateway on my pfSense, which I did, or I could not create the static route. I also had to set "Local server uses NAT" on WireGuard to No, which I did, then deactivated WireGuard and enabled it again. Tried it on my phone on 4G: no internet at all and no network access. To get it working again I had to undo everything back to the previous state. I'm at a loss on how to fix it.
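     For reference, the static route from that remark just sends the WireGuard tunnel subnet to the Unraid box. On a plain Linux router it would be the equivalent of the command below (pfSense configures this through its static routes/gateways pages rather than a shell command); the subnet and gateway shown are the ones from my setup, adjust to yours:

        ip route add 10.253.0.0/24 via 192.168.1.60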
  8. The last problem seems to be fixed after I restarted the Docker container. Everything was slow, I could not save anything, and everything timed out. After a restart it worked again. Another question: is it possible to use AniDB.net as an indexer? If so, how, and how can I change the current ones over to it?
  9. I added 2 series. For one it did download the files, and for the other it does not. So it's doing a backlog search; it says it can't find anything, but when I do my own search on the Nyaa site I get enough results. I also added a wanted group. https://nyaa.si/?f=0&c=1_2&q=[horrible]+Gleipnir
  10. I am getting this error:

        2020-06-14 14:58:51 ERROR Thread_0 :: [] Exception: Not sending, banned
        Traceback (most recent call last):
          File "/app/medusa/ext/adba/aniDBAbstracter.py", line 280, in add_to_mylist
            self.aniDB.mylistadd(size=self.size, ed2k=self.ed2k, state=state, viewed=viewed, source=source, storage=storage, other=other)
          File "/app/medusa/ext/adba/__init__.py", line 730, in mylistadd
            return self.handle(MyListAddCommand(lid, fid, size, ed2k, aid, aname, gid, gname, epno, edit, state, viewed, source, storage, other), callback)
          File "/app/medusa/ext/adba/__init__.py", line 160, in handle
            self.link.request(command)
          File "/app/medusa/ext/adba/aniDBlink.py", line 238, in request
            self._send(command)
          File "/app/medusa/ext/adba/aniDBlink.py", line 209, in _send
            raise AniDBError("Not sending, banned")
        adba.aniDBerrors.AniDBError: Not sending, banned

        2020-06-14 14:52:35 ERROR Thread_0 :: [] Exception: Command has timed out
        Traceback (most recent call last):
          File "/app/medusa/ext/adba/__init__.py", line 166, in handle
            command.resp
        AttributeError: 'MyListAddCommand' object has no attribute 'resp'

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "/app/medusa/ext/adba/aniDBAbstracter.py", line 280, in add_to_mylist
            self.aniDB.mylistadd(size=self.size, ed2k=self.ed2k, state=state, viewed=viewed, source=source, storage=storage, other=other)
          File "/app/medusa/ext/adba/__init__.py", line 730, in mylistadd
            return self.handle(MyListAddCommand(lid, fid, size, ed2k, aid, aname, gid, gname, epno, edit, state, viewed, source, storage, other), callback)
          File "/app/medusa/ext/adba/__init__.py", line 169, in handle
            raise AniDBCommandTimeoutError("Command has timed out")
        adba.aniDBerrors.AniDBCommandTimeoutError: Command has timed out
  11. This is what is in the script:

        #!/bin/bash
        #!/usr/bin/php
        <?
        exec('/usr/local/emhttp/plugins/dynamix/scripts/notify -e "Antivirus Scan" -s "Antivirus Scan Started" -d "Antivirus Scan Started" -i "normal"');
        exec('docker start ClamAV');
        for ( ;; ) {
          $status = trim(exec("docker ps | grep ClamAV"));
          if ( ! $status ) break;
          sleep(60);
        }
        exec("docker logs ClamAV 2>/dev/null",$logs);
        foreach ($logs as $line) {
          $virus = explode(" ",$line);
          if (trim(end($virus)) == "FOUND" ) {
            $infected .= "$line\n";
          }
        }
        if ( ! $infected ) $infected = "No infections found\n";
        exec('/usr/local/emhttp/plugins/dynamix/scripts/notify -e "Antivirus Scan" -s "Antivirus Scan Finished" -d '.escapeshellarg($infected).' -i "normal"');
        ?>

     I see what I did wrong now: I removed the top #!/bin/bash line, and it doesn't give any errors anymore when I run the script.
  12. I added the notify user script, but I'm getting this:

        Script location: /tmp/user.scripts/tmpScripts/clamav/script
        Note that closing this window will abort the execution of this script
        /tmp/user.scripts/tmpScripts/clamav/script: line 3: ?: No such file or directory
        /tmp/user.scripts/tmpScripts/clamav/script: line 4: syntax error near unexpected token `'/usr/local/emhttp/plugins/dynamix/scripts/notify -e "Antivirus Scan" -s "Antivirus Scan Started" -d "Antivirus Scan Started" -i "normal"''
        /tmp/user.scripts/tmpScripts/clamav/script: line 4: `exec('/usr/local/emhttp/plugins/dynamix/scripts/notify -e "Antivirus Scan" -s "Antivirus Scan Started" -d "Antivirus Scan Started" -i "normal"');'

     Do I still need to install something else?
  13. It isn't a big problem; like you said, it only matters if you want to start/restart the server, and it isn't a very high priority to fix either. Yes, I figured out that I had to add the passive ports in the config (see the snippet below). I have run multiple ProFTPD servers but never had to do this, so I was a bit confused. :) I wanted to edit my post to say I had it fixed, but it was already too late here and I went to sleep.
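     For anyone else hitting this: the ProFTPD directive involved is PassivePorts. A minimal sketch of the line to add to the config (the range here is only an example; use whatever range you forward on your firewall):

        PassivePorts 49152 50000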
  14. I updated the plugin and now the Apps tab is gone, and it says:

        plugin: updating: community.applications.plg
        Cleaning Up Old Versions
        Setting up cron for background notifications
        plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/archive/community.applications-2020.06.13-x86_64-1.txz ... failed (Invalid URL / Server error response)
        plugin: wget: https://raw.githubusercontent.com/Squidly271/community.applications/master/archive/community.applications-2020.06.13-x86_64-1.txz download failure (Invalid URL / Server error response)
  15. I use the Dark skin on Unraid, but when I load the ProFTPD page it has a white background and I can't read anything on it. Locally I got it working; I can connect to the FTP and everything. But from outside I can't connect to it. My setup has worked, and currently works, for other things: I use pfSense and made an alias with a few IPs in it, and only those IPs can connect to the FTP. I want to move from my QNAP FTP to my Unraid ProFTPD FTP. Everything works except connections from outside.
  16. I have 2 SSDs in my Unraid that aren't in use at the moment. I want to use them for VMs and Docker. I have precleared them, but I can't mount or format them. I can add them to the array and then choose that the VMs/Docker should be stored there, but SSDs in the array aren't advisable. What is the best way to tackle this? I know I can install Windows on the SSD itself and then pass it through to a VM, but I want to install multiple VMs on the SSD.
  17. So what is a better way to move files from another NAS to my Unraid that doesn't have this "problem"? I still have some TBs to go, and my Unraid isn't running at full power yet.
  18. I can't copy to the array anymore, BUT there is still enough space on the share I copy to. I copy to /mnt/user/share/. The structure of the folders is /share/Complete/[0-9,A-Z], each being one directory, and in those directories are the series names (see the layout sketch below). So it should split on the series-name folders, but when I was copying it still put everything on 1 HDD, which was fine until it was full; now I can't copy any file to that share anymore. Maybe unless I do it manually to disk X, which is empty? But that isn't very handy. Yes, some of those directories can be BIG.
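     To make the layout concrete, it looks roughly like this (the series names are only examples):

        share/
          Complete/
            0-9/
              Some Series/
            A/
              Another Series/
            B/
              ...

     So the split I'm expecting is at the individual series folders under each letter directory.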
  19. I didn't change anything in the template. Every Docker container has its own IP. Even if it were a port problem (which it didn't say), it should not remove the container at all; it should just have stopped it. I had reinstalled it and left it stopped, so it wasn't running. But when I just checked, it was removed again.
  20. I installed the container yesterday and it worked. I saw there was an update, and it did the update while I was sleeping. In the morning I saw that the Docker container was gone. The data directory is still there. When I load the container again, it isn't working. I didn't change anything in the settings since I got it running perfectly. Any idea?
  21. Question: I'm moving files between my NAS and Unraid. First I did it with SSH/MC, but I have to keep a window open on my PC, which isn't a big deal, but OK. So I tried Krusader and then moved files. While it works nicely, the speed is lower than when I use SSH/MC. Is there a way to increase the speed in Krusader? (Screenshots attached: the speed in SSH/MC, and in Krusader/Docker.)
  22. I have a server with 5 HDDs in it: 3x 3 TB and 2x 12 TB. 1x 12 TB is parity, 1x 12 TB is data, and 3x 3 TB are data (the 3 TB drives will be replaced later). I am copying data from my old NAS to the new Unraid server via the Krusader Docker container; I mounted my old NAS and attached it to the container. In Krusader I'm copying the data to /mnt/user/[share]. While that works perfectly, it's only copying the data to the 12 TB data disk and not to the other HDDs. These are the settings I'm using for the share. If the share is the top level, I have 1 directory in there; in that directory I have directories a-z, and in those there are also directories. So if I understand it correctly, the directories in the a-z dirs will be split over the HDDs? But it does not seem to do that.