
Sean M.

Members
  • Content Count

    52
  • Joined

  • Last visited

Community Reputation

2 Neutral

About Sean M.

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Philadelphia, PA


  1. It's been a while since I had to set it up or tweak it, but I use a lot of SpaceInvaderOne's video guides. Here's his one on setting up OpenVPN.
  2. Updated the plugin and ran a preclear successfully; logs attached for reference.

     unRAID Server Preclear of disk S246J9FB804046
     Cycle 1 of 1, partition start on sector 64.

     Step 1 of 5 - Pre-read verification:                  [2:22:13 @ 117 MB/s] SUCCESS
     Step 2 of 5 - Zeroing the disk:                       [2:21:46 @ 117 MB/s] SUCCESS
     Step 3 of 5 - Writing unRAID's Preclear signature:    SUCCESS
     Step 4 of 5 - Verifying unRAID's Preclear signature:  SUCCESS
     Step 5 of 5 - Post-Read verification:                 [2:22:57 @ 116 MB/s] SUCCESS

     Cycle elapsed time: 7:07:04 | Total elapsed time: 7:07:06

     S.M.A.R.T. Status (default)

     ATTRIBUTE                      INITIAL   CYCLE 1   STATUS
     5-Reallocated_Sector_Ct        0         0         -
     9-Power_On_Hours               17347     17354     Up 7
     194-Temperature_Celsius        28        42        Up 14
     196-Reallocated_Event_Count    0         0         -
     197-Current_Pending_Sector     0         0         -
     198-Offline_Uncorrectable      0         0         -
     199-UDMA_CRC_Error_Count       0         0         -

     SMART overall-health self-assessment test result: PASSED

     --> ATTENTION: Please take a look into the SMART report above for drive health issues.
     --> RESULT: Preclear Finished Successfully!

     TOWER-preclear.disk-20180507-0010.zip
     tower-diagnostics-20180507-0016.zip
  3. Enabled SMART, ran a short test, and downloaded the results (attached). I saw gfjardim mentioned there was an issue with the plugin, so I'll probably just wait and re-try once that's updated, as I'm in no rush to move this drive over to the array.

     Smart-20180429-1747.zip
  4. Not 100% sure the disk is okay, which is part of the reason for the preclear. It came from an old desktop that had been decommissioned for a while; last I remember, it was working fine, though. A full copy of the diagnostics and preclear logs is attached. The drive in question is ST33000651AS_9XK0AKZH.

     Preclear.disk-20180429-0756.zip
     Diagnostics-20180429-0756.zip
  5. Were you using the preclear plugin? If so, which version, which script, and which operation?
  6. Right, so since I'm just re-using my own drive, I skipped the erase. It aborted in the post-read verification.

     Plugin Version: 018.04.24
     Script: gfjardim - 0.9.5-beta
     Operation: Clear

     Preclear_Disk_Log:
     Apr 26 16:24:38 preclear_disk_9XK0AKZH_29218: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --notify 4 --frequency 4 --cycles 1 --skip-preread --no-prompt /dev/sdb
     Apr 26 16:24:38 preclear_disk_9XK0AKZH_29218: Preclear Disk Version: 0.9.5-beta
     Apr 26 16:24:38 preclear_disk_9XK0AKZH_29218: S.M.A.R.T. info type: default
     Apr 26 16:24:39 preclear_disk_9XK0AKZH_29218: Zeroing: dd if=/dev/zero of=/dev/sdb bs=2097152 seek=2097152 count=3000590884864 conv=notrunc iflag=count_bytes,nocache oflag=seek_bytes 2>/tmp/.preclear/sdb/dd_output
     Apr 26 16:24:39 preclear_disk_9XK0AKZH_29218: Zeroing: dd pid [30378]
     Apr 26 17:39:55 preclear_disk_9XK0AKZH_29218: smartctl exec_time: 1s
     Apr 26 23:30:13 preclear_disk_9XK0AKZH_29218: smartctl exec_time: 8s
     Apr 26 23:56:51 preclear_disk_9XK0AKZH_29218: smartctl exec_time: 7s
     Apr 26 23:58:59 preclear_disk_9XK0AKZH_29218: Zeroing: dd - wrote 3000560017408 of 3000592982016.
     Apr 26 23:59:00 preclear_disk_9XK0AKZH_29218: Zeroing: dd exit code - 0
     Apr 26 23:59:01 preclear_disk_9XK0AKZH_29218: Post-Read: verifying the beggining of the disk.
     Apr 26 23:59:01 preclear_disk_9XK0AKZH_29218: Post-Read: dd if=/dev/sdb bs=512 count=4095 skip=1 conv=notrunc iflag=direct 2>/tmp/.preclear/sdb/dd_output | cmp - /dev/zero &>/tmp/.preclear/sdb/cmp_out
     Apr 26 23:59:01 preclear_disk_9XK0AKZH_29218: Post-Read: dd pid [1437]
     Apr 26 23:59:01 preclear_disk_9XK0AKZH_29218: Post-Read: verifying the rest of the disk.
     Apr 26 23:59:01 preclear_disk_9XK0AKZH_29218: Post-Read: dd if=/dev/sdb bs=2097152 skip=2097152 count=3000590884864 conv=notrunc iflag=nocache,sync,count_bytes,skip_bytes 2>/tmp/.preclear/sdb/dd_output | cmp - /dev/zero &>/tmp/.preclear/sdb/cmp_out
     Apr 26 23:59:01 preclear_disk_9XK0AKZH_29218: Post-Read: dd pid [1459]
     Apr 27 07:08:15 preclear_disk_9XK0AKZH_29218: Post-Read: dd - read 3000523292672 of 3000592982016.
     Apr 27 07:08:15 preclear_disk_9XK0AKZH_29218: Post-Read: dd command failed, exit code [141].
     Apr 27 07:08:15 preclear_disk_9XK0AKZH_29218: Post-Read: dd output -> 434+0 records in
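The zero-then-verify pattern the log shows (dd writes zeros, then the post-read pipes the disk back through cmp against /dev/zero) can be sketched safely on a scratch file instead of a real /dev/sdX. The tiny 4 MiB size and the temp-file "device" are stand-ins, not what the plugin actually uses:

```shell
#!/bin/sh
# Sketch of preclear's zero-and-verify pattern on a scratch file.
# A 4 MiB temp file stands in for the target device.
disk=$(mktemp)
ref=$(mktemp)
size=4194304   # 4 MiB stand-in for the device capacity

# "Zeroing the disk": write zeros across the whole target.
dd if=/dev/zero of="$disk" bs=1048576 count=4 conv=notrunc 2>/dev/null

# "Post-Read verification": compare the target against a same-size zero
# stream; cmp exits non-zero at the first differing byte.
head -c "$size" /dev/zero > "$ref"
if cmp -s "$disk" "$ref"; then result=SUCCESS; else result=FAIL; fi
echo "post-read verification: $result"
```

Exit code 141 in the log is 128 + 13 (SIGPIPE), i.e. dd's reader side of the pipe went away mid-verify, which is why the script reports the dd command as failed.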
  7. Just recently re-installed the plugin.

     Plugin Version: 018.04.24
     Script: gfjardim - 0.9.5-beta
     Operation: Erase and Clear the disk

     Should I just skip the erase part and clear it? In the past I've only used new drives and thus just cleared, but this one is being re-purposed from an old desktop.
  8. Is this the right place to post the results and ask questions about the clear itself? I remember there used to be a thread for that before this plugin, so I wasn't sure. I was clearing an older drive using the "erase and clear" option and got the following error, which stopped the process.

     Apr 25 17:11:46 preclear_disk_9XK0AKZH_24453: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --erase-clear --notify 4 --frequency 4 --cycles 1 --no-prompt /dev/sdb
     Apr 25 17:11:46 preclear_disk_9XK0AKZH_24453: Preclear Disk Version: 0.9.5-beta
     Apr 25 17:11:46 preclear_disk_9XK0AKZH_24453: S.M.A.R.T. info type: default
     Apr 25 17:11:47 preclear_disk_9XK0AKZH_24453: Pre-Read: dd if=/dev/sdb of=/dev/null bs=2097152 skip=2097152 count=3000590884864 conv=notrunc iflag=nocache,sync,count_bytes,skip_bytes
     Apr 25 17:11:47 preclear_disk_9XK0AKZH_24453: Pre-Read: dd pid [25616]
     Apr 25 18:34:22 preclear_disk_9XK0AKZH_24453: dd[25616]: pausing (sync command issued)
     Apr 25 18:34:54 preclear_disk_9XK0AKZH_24453: dd[25616]: resumed
     Apr 26 00:21:41 preclear_disk_9XK0AKZH_24453: Pre-Read: dd - read 3000592982016 of 3000592982016.
     Apr 26 00:21:41 preclear_disk_9XK0AKZH_24453: Pre-Read: dd exit code - 0
     Apr 26 00:21:41 preclear_disk_9XK0AKZH_24453: Erasing: openssl enc -aes-256-ctr -pass pass:'oKM2mDN6kqiPry0mztZP3xtE9uQYyMy7pgsjCZEjsAVkcHSJZwPBXjXi9fn26Z7a03BkSRqAsebJYeIRyzKppVg+XEyWK+U/WNi+D0riDiF+Zq36MXHL2v3K2W1WiFUpAy7hNGXNq5aWQbClWKltpTCXXLHEMVHpndMRSCyKdTk=' -nosalt < /dev/zero 2>/dev/null | dd of=/dev/sdb bs=2097152 seek=2097152 count=3000590884864 conv=notrunc iflag=count_bytes,nocache oflag=seek_bytes iflag=fullblock 2>/tmp/.preclear/sdb/dd_output
     Apr 26 00:21:41 preclear_disk_9XK0AKZH_24453: Erasing: dd pid [13610]
     Apr 26 07:23:55 preclear_disk_9XK0AKZH_24453: smartctl exec_time: 7s
     Apr 26 07:30:11 preclear_disk_9XK0AKZH_24453: smartctl exec_time: 7s
     Apr 26 07:39:02 preclear_disk_9XK0AKZH_24453: smartctl exec_time: 1s
     Apr 26 07:39:09 preclear_disk_9XK0AKZH_24453: smartctl exec_time: 8s
     Apr 26 07:55:47 preclear_disk_9XK0AKZH_24453: dd process hung at 3000577818624, killing....
     Apr 26 07:55:47 preclear_disk_9XK0AKZH_24453: Continuing disk write on byte 3000575721472
     Apr 26 07:56:01 preclear_disk_9XK0AKZH_24453: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 446: 13609 Exit 1 openssl enc -aes-256-ctr -pass pass:'oKM2mDN6kqiPry0mztZP3xtE9uQYyMy7pgsjCZEjsAVkcHSJZwPBXjXi9fn26Z7a03BkSRqAsebJYeIRyzKppVg+XEyWK+U/WNi+D0riDiF+Zq36MXHL2v3K2W1WiFUpAy7hNGXNq5aWQbClWKltpTCXXLHEMVHpndMRSCyKdTk=' -nosalt < /dev/zero 2> /dev/null
     Apr 26 07:56:01 preclear_disk_9XK0AKZH_24453: 13610 Killed | dd of=/dev/sdb bs=2097152 seek=2097152 count=3000590884864 conv=notrunc iflag=count_bytes,nocache oflag=seek_bytes iflag=fullblock 2> /tmp/.preclear/sdb/dd_output
     Apr 26 07:56:01 preclear_disk_9XK0AKZH_24453: Erasing: openssl enc -aes-256-ctr -pass pass:'vhk+SaVdyeTevJEBVabhQeoj4g5qT1XuobgHBZJspcEN+iY84H6/Hl1ZBwJgrIaCCWziEn9HsE8iWI2/8dQzfWBbeY0Dsxl/KD6Yvslw/WxJOhXKnMZDAKT4oqH52+tX7WL3pQgtahw8YH8XhdYjYbNof45P5WTlroTf6LKP1g0=' -nosalt < /dev/zero 2>/dev/null | dd of=/dev/sdb bs=2097152 seek=3000575721472 count=17260544 conv=notrunc iflag=count_bytes,nocache oflag=seek_bytes iflag=fullblock 2>/tmp/.preclear/sdb/dd_output
     Apr 26 07:56:01 preclear_disk_9XK0AKZH_24453: Erasing: dd pid [16010]
     Apr 26 07:56:02 preclear_disk_9XK0AKZH_24453: Erasing: dd command failed -> 8+1 records in 8+1 records out 17260544 bytes (17 MB, 16 MiB) copied, 0.0164792 s, 1.0 GB/s
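The erase step in the log works by AES-256-CTR-encrypting an endless stream of zeros (which yields fast pseudorandom data) and piping it into dd to overwrite the target. A minimal sketch of that pattern, run against a scratch file rather than a real device, with a fixed example passphrase where the plugin generates a random one per pass:

```shell
#!/bin/sh
# Sketch of the "erase" pattern: AES-256-CTR over /dev/zero produces a
# pseudorandom stream; dd writes a bounded amount of it over the target.
# A temp file stands in for the real /dev/sdX.
target=$(mktemp)
zeros=$(mktemp)
size=1048576   # 1 MiB stand-in; the real run covers the whole device

openssl enc -aes-256-ctr -pass pass:'example-passphrase' -nosalt \
    < /dev/zero 2>/dev/null |
  dd of="$target" bs=65536 count=16 iflag=fullblock conv=notrunc 2>/dev/null

# Sanity check: the target is full size and no longer all zeros.
head -c "$size" /dev/zero > "$zeros"
if cmp -s "$target" "$zeros"; then
  echo "target still zeroed"
else
  echo "target overwritten with pseudorandom data"
fi
```

When dd stops reading, openssl dies with SIGPIPE; that is the "Exit 1 openssl" noise in the log when the script killed the hung dd, not a separate failure.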
  9. Was setting up NZBHydra this morning; I configured the downloaders and indexers, restarted, and am now getting a "fatal error occurred". A little googling pointed to the port already being in use / another instance of Hydra already running, but I don't see that being the case. Any tips? Thanks!

     2018-04-08 12:32:59,218 - INFO - nzbhydra - Dummy-3 - Exit registered. Stopping and closing database
     2018-04-08 12:32:59,219 - INFO - nzbhydra - Dummy-3 - Database shut down
     2018-04-08 12:33:00,179 - NOTICE - nzbhydra - MainThread - Starting NZBHydra 0.2.233
     2018-04-08 12:33:00,179 - NOTICE - nzbhydra - MainThread - Base path is /app/hydra
     2018-04-08 12:33:00,179 - NOTICE - nzbhydra - MainThread - Loading settings from /config/hydra/settings.cfg
     2018-04-08 12:33:00,184 - INFO - log - MainThread - Logging to file /config/hydra/nzbhydra.log as defined in the command line
     2018-04-08 12:33:00,185 - INFO - log - MainThread - Setting umask of log file /config/hydra/nzbhydra.log to 0640
     2018-04-08 12:33:00,185 - INFO - nzbhydra - MainThread - Started
     2018-04-08 12:33:00,185 - INFO - nzbhydra - MainThread - Loading database file /config/hydra/nzbhydra.db
     2018-04-08 12:33:00,189 - INFO - nzbhydra - MainThread - Starting db
     2018-04-08 12:33:00,190 - INFO - indexers - MainThread - Activated indexer NZBCat
     2018-04-08 12:33:00,192 - INFO - indexers - MainThread - Activated indexer nzb.su
     2018-04-08 12:33:00,194 - INFO - indexers - MainThread - Activated indexer NZBGeek
     2018-04-08 12:33:00,196 - INFO - indexers - MainThread - Activated indexer Drunken Slug
     2018-04-08 12:33:00,197 - INFO - indexers - MainThread - Activated indexer NZB Finder
     2018-04-08 12:33:00,198 - NOTICE - nzbhydra - MainThread - Starting web app on 192.168.1.154:5075
     2018-04-08 12:33:00,199 - NOTICE - nzbhydra - MainThread - Go to http://192.168.1.154:5075 for the frontend
     2018-04-08 12:33:00,199 - INFO - web - MainThread - Running threaded server
     2018-04-08 12:33:00,202 - ERROR - nzbhydra - MainThread - Fatal error occurred
     Traceback (most recent call last):
       File "/app/hydra/nzbhydra.py", line 214, in run
         web.run(host, port, basepath)
       File "/app/hydra/nzbhydra/web.py", line 1730, in run
         app.run(host=host, port=port, debug=config.settings.main.debug, threaded=config.settings.main.runThreaded, use_reloader=config.settings.main.flaskReloader)
       File "/app/hydra/libs/flask/app.py", line 772, in run
         run_simple(host, port, self, **options)
       File "/app/hydra/libs/werkzeug/serving.py", line 625, in run_simple
         inner()
       File "/app/hydra/libs/werkzeug/serving.py", line 603, in inner
         passthrough_errors, ssl_context).serve_forever()
       File "/app/hydra/libs/werkzeug/serving.py", line 506, in make_server
         passthrough_errors, ssl_context)
       File "/app/hydra/libs/werkzeug/serving.py", line 440, in __init__
         HTTPServer.__init__(self, (host, int(port)), handler)
       File "/app/hydra/libs/SocketServer.py", line 420, in __init__
         self.server_bind()
       File "/app/hydra/libs/BaseHTTPServer.py", line 108, in server_bind
         SocketServer.TCPServer.server_bind(self)
       File "/app/hydra/libs/SocketServer.py", line 434, in server_bind
         self.socket.bind(self.server_address)
       File "/app/hydra/libs/socket.py", line 228, in meth
         return getattr(self._sock,name)(*args)
     error: [Errno 99] Address not available
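Worth noting: Errno 99 is EADDRNOTAVAIL, not EADDRINUSE (Errno 98). A port conflict would give 98; 99 usually means the bind IP Hydra has in its settings (192.168.1.154 in the log) does not exist on the container's interfaces, e.g. after a network or IP change. A hedged little check, using the IP and port from the log purely as examples:

```shell
#!/bin/sh
# Hypothetical check to distinguish "address not available" (Errno 99)
# from "address already in use" (Errno 98): try binding a throwaway socket
# to the configured host:port. Requires python3.
can_bind() {  # can_bind HOST PORT -> exit 0 if a test socket binds
  python3 -c "import socket, sys
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
    s.bind(('$1', int('$2')))
except OSError:
    sys.exit(1)
s.close()" 2>/dev/null
}

# Values from the log, as examples only.
if can_bind 192.168.1.154 5075; then
  echo "bind ok"
else
  echo "cannot bind; check that this IP still exists on the machine"
fi
```

If the check fails, pointing Hydra at 0.0.0.0 (all interfaces) or at the container's current IP in settings.cfg is the usual fix for this class of error.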
  10. Thanks, I used Unassigned Devices to pull the data off as trurl mentioned above. Loaded it back mounted as cache and went to format through the UI; got the following error.

      Jan 14 14:01:57 Tower emhttpd: req (29): startState=STARTED&file=&cmdFormat=Format&unmountable_mask=1073741824&confirmFormat=OFF&optionCorrect=correct&csrf_token=****************
      Jan 14 14:01:59 Tower emhttpd: shcmd (1127): /sbin/wipefs -a /dev/sdd1
      Jan 14 14:01:59 Tower emhttpd: shcmd (1128): mkdir -p /mnt/cache
      Jan 14 14:01:59 Tower emhttpd: shcmd (1129): mount -t ext4 -o noatime,nodiratime /dev/sdd1 /mnt/cache
      Jan 14 14:01:59 Tower root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error.
      Jan 14 14:01:59 Tower emhttpd: shcmd (1129): exit status: 32
      Jan 14 14:01:59 Tower emhttpd: /mnt/cache mount error: No file system
      Jan 14 14:01:59 Tower emhttpd: shcmd (1130): umount /mnt/cache
      Jan 14 14:01:59 Tower kernel: EXT4-fs (sdd1): VFS: Can't find ext4 filesystem
      Jan 14 14:01:59 Tower root: umount: /mnt/cache: not mounted.
      Jan 14 14:01:59 Tower emhttpd: shcmd (1130): exit status: 32
      Jan 14 14:01:59 Tower emhttpd: shcmd (1131): rmdir /mnt/cache
      Jan 14 14:01:59 Tower emhttpd: Starting services...
      Jan 14 14:01:59 Tower emhttpd: no mountpoint along path: /mnt/cache

      Should I just use the terminal instead of the UI, or will it not make a difference? Thanks again!
  11. Hello, just upgraded to 6.4. It rebooted just fine, but now it's telling me ext4 for my SSD cache is unsupported and prompting me to format. I've never had an issue with this previously, so I guess my first question is: is there any way to get it back to allowing the cache SSD as ext4? If not, what would be the recommended process so I don't lose everything currently stored on the cache? If it were mountable, I could move everything off, format, then move it back on, but given I can't get to that point, I'm looking for some tips. Appreciate the assistance in advance!
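The copy-off / reformat / copy-back plan described above can be sketched generically. All paths here are placeholders (temp directories stand in for the cache mount and a backup location on the array), not actual unRAID mount points:

```shell
#!/bin/sh
# Sketch of the copy-off / reformat / copy-back migration.
# Temp dirs stand in for /mnt/cache and a backup destination.
src=$(mktemp -d)   # stands in for the cache mount
dst=$(mktemp -d)   # stands in for an array share or Unassigned Devices disk

echo "appdata" > "$src/example.txt"   # pretend cache contents

cp -a "$src"/. "$dst"/   # 1) copy everything off the cache, preserving attrs
# 2) reformat the cache device to a supported filesystem (e.g. via the UI)
cp -a "$dst"/. "$src"/   # 3) copy everything back
```

The -a flag preserves permissions, timestamps, and symlinks, which matters for Docker appdata; the same steps work with rsync -a if you prefer resumable copies.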
  12. This came up for me the other day; here's a post with all the various ways. I used http://<Insert IP Address>/log/syslog. Replace <Insert IP Address> with your server's address; you can use Tower if that works for you in general.
  13. I realize this is an old topic, but I wanted to get some advice. I used cAdvisor to find the largest offenders but am not sure of the best way to clean it up. I could certainly do a full re-load, but wanted to ask before I went that route. Thanks!
  14. Tried setting up PlexWatch but can't seem to get it to start. There were a few posts earlier this year about the same issue, but they didn't seem to end up resolved. Any suggestions? Has anyone released an updated PlexWatch, or is PlexWatch even still active / the best option? Thanks in advance! Edit: It seems PlexPy can do the same 'new content newsletter' feature, which is what I was looking for.
  15. Quoting the earlier suggestion:

      "I've seen this error before. Basically, the DeleteCleanupDisk and ParCleanupQueue options are defunct; it was a problem with a version change of NZBGet, IIRC. If you edit the nzbget.conf file in /config with a suitable text editor (I'd recommend Notepad++ on Windows), then search for those terms and delete both corresponding lines, I think that fixes it. Make a backup first, though."

      Appreciate the tip! Just wanted to follow up in case anyone else runs across the same issue. I did go in and delete those two lines, however I was still running into issues, so I just re-installed the Docker container completely; it seems to be running smoother now than ever.
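The manual edit quoted above (back up nzbget.conf, then delete the two defunct option lines) can also be scripted. This is a sketch run on a throwaway copy with made-up surrounding options; the two option names come from the post, and the sed -i form shown is the GNU one (BSD sed needs -i ''):

```shell
#!/bin/sh
# Scripted version of the fix: back up nzbget.conf, then remove the two
# defunct options. Demonstrated on a temp file with example contents.
conf=$(mktemp)
cat > "$conf" <<'EOF'
MainDir=/config
DeleteCleanupDisk=yes
ParCleanupQueue=yes
ControlPort=6789
EOF

cp "$conf" "$conf.bak"   # make a backup first, as the post advises
sed -i '/^DeleteCleanupDisk=/d; /^ParCleanupQueue=/d' "$conf"
```

After this, the remaining options are untouched and the backup still holds the original lines, so the change is easy to revert if NZBGet complains about something else.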