helpermonkey

Members
  • Content Count

    94
  • Joined

Community Reputation

0 Neutral

About helpermonkey

  • Rank
    Advanced Member

  1. yup - i had Netherlands in there - just found out it was bunk and fixed it by moving to Toronto 🙂 woot.
  2. So a few months back my delugevpn stopped working ... i didn't get around to fixing it until now, and looking at the log file i see this (a config-fix sketch is below, after this list):

         Created by... binhex [ASCII banner] https://hub.docker.com/u/binhex/
         2019-06-13 00:23:18.241690 [info] System information Linux 072d4aa08eae 4.19.41-Unraid #1 SMP Wed May 8 14:23:25 PDT 2019 x86_64 GNU/Linux
         2019-06-13 00:23:18.278918 [info] PUID defined as '99'
         2019-06-13 00:23:18.568110 [info] PGID defined as '100'
         2019-06-13 00:23:18.873367 [info] UMASK defined as '000'
         2019-06-13 00:23:18.892447 [info] Setting permissions recursively on volume mappings...
         2019-06-13 00:23:19.114592 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
         2019-06-13 00:23:19.133916 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
         2019-06-13 00:23:19.153238 [info] VPN_ENABLED defined as 'yes'
         2019-06-13 00:23:19.193262 [crit] No OpenVPN config file located in /config/openvpn/ (ovpn extension), please download from your VPN provider and then restart this container, exiting...
         [the log then repeats from the banner]

     In case the problem is with OpenVPN ... my log file basically just says this:

         ./run: line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory

     i cannot launch the webui for my vpn either .... so if it's a problem with that - i will ask in the appropriate place.
  3. You rock! Thank you so much for all your help. this is why i love unraid - the software is cool - the people are fantastic.
  4. okay here is the report from xfs_repair (a read-only re-check sketch is below, after this list):

         root@Buddha:~# xfs_repair -v /dev/md5
         Phase 1 - find and verify superblock...
                 - block cache size set to 737008 entries
         Phase 2 - using internal log
                 - zero log...
         zero_log: head block 505816 tail block 505816
                 - scan filesystem freespace and inode maps...
         sb_ifree 1411, counted 1417
         sb_fdblocks 8636304, counted 9131157
                 - found root inode chunk
         Phase 3 - for each AG...
                 - scan and clear agi unlinked lists...
                 - process known inodes and perform inode discovery...
                 - agno = 0
         data fork in ino 156931303 claims free block 19618443
         data fork in ino 156931303 claims free block 19618444
         imap claims in-use inode 156931303 is free, correcting imap
         data fork in ino 159222676 claims free block 19903014
         attr fork in ino 159222676 claims free block 19906546
         imap claims in-use inode 159222676 is free, correcting imap
         data fork in ino 159222697 claims free block 19905936
         data fork in ino 159222697 claims free block 19905937
         imap claims in-use inode 159222697 is free, correcting imap
         imap claims in-use inode 159222699 is free, correcting imap
         data fork in ino 159222706 claims free block 19906096
         data fork in ino 159222706 claims free block 19906097
         imap claims in-use inode 159222706 is free, correcting imap
         imap claims in-use inode 159222708 is free, correcting imap
                 - agno = 1
                 - agno = 2
                 - agno = 3
                 - process newly discovered inodes...
         Phase 4 - check for duplicate blocks...
                 - setting up duplicate extent list...
                 - check for inodes claiming duplicate blocks...
                 - agno = 3
                 - agno = 0
                 - agno = 1
                 - agno = 2
         Phase 5 - rebuild AG headers and trees...
                 - agno = 0
                 - agno = 1
                 - agno = 2
                 - agno = 3
                 - reset superblock...
         Phase 6 - check inode connectivity...
                 - resetting contents of realtime bitmap and summary inodes
                 - traversing filesystem ...
                 - agno = 0
                 - agno = 1
                 - agno = 2
                 - agno = 3
                 - traversal finished ...
                 - moving disconnected inodes to lost+found ...
         Phase 7 - verify and correct link counts...
         Maximum metadata LSN (1:505836) is ahead of log (1:505816).
         Format log to cycle 4.

                 XFS_REPAIR Summary    Wed Jun 12 10:58:04 2019

         Phase           Start           End             Duration
         Phase 1:        06/12 10:56:10  06/12 10:56:14  4 seconds
         Phase 2:        06/12 10:56:14  06/12 10:56:14
         Phase 3:        06/12 10:56:14  06/12 10:56:16  2 seconds
         Phase 4:        06/12 10:56:16  06/12 10:56:16
         Phase 5:        06/12 10:56:16  06/12 10:56:16
         Phase 6:        06/12 10:56:16  06/12 10:56:17  1 second
         Phase 7:        06/12 10:56:17  06/12 10:56:17

         Total run time: 7 seconds
         done
         root@Buddha:~#

     so it's flipping back!!! you rock!! so two outstanding issues that i have questions about: 1) my plex docker seems to have disappeared... however, my settings and directories and such are still on my drive, so can i just "reinstall it"? or should i wipe those and start with a fresh install? 2) is there any reason to (or an effective way to) move some of the data from my almost-full drives to either of my drives with a good chunk of space?
  5. it's not 100% out of the question that i put back in the wrong 2 TB drive BTW 🙂
  6. and disk 5 didn't mount - it's connected via the card I just replaced.... not sure if that would make a difference. Attached are the diagnostics. buddha-diagnostics-20190611-1842.zip
  7. Cool! does it look like i'm good to go to start this thing back up again then? i'm assuming it won't lose any of my data after all the prep but just want to make sure.
  8. okay guys - thanks so much for your help up until this point. Soooo ... i think i'm ready to fire up the server ... here's the status page... also - fyi - i've started getting more of those UDMA errors that I thought were cable/card related ... johnnie helped me get that equipment swapped out - however, now i'm getting errors on drives that are connected directly to the mobo AND that were not showing errors when they were in other locations (nor were the drives in the existing slots showing errors). Is it possible this is just a symptom of all the changes going on? Is there a way to double-check or test these things otherwise? (A smartctl re-check sketch is below, after this list.)

         Disk 1: 199 UDMA CRC error count 0x000a 200 200 000 Old age Always Never 3522
         Disk 2: 199 UDMA CRC error count 0x0032 200 001 000 Old age Always Never 331
         Disk 3: 199 UDMA CRC error count 0x0032 200 196 000 Old age Always Never 2567
         Disk 5: 199 UDMA CRC error count 0x003e 200 199 000 Old age Always Never 8

     Disk 1 is the new 8TB data drive. When i first inserted this drive, i was doing a preclear on both the parity drive and this drive at the same time. However, I realized (after reading) that this was problematic, so i stopped the preclear on Drive 1 and then ran it alone and from the beginning after that... the errors have always shown up. Disk 5 is the one plugged into the new card with the new cables. So a) can i fire this up again and start the parity sync-data rebuild? b) what do i do about these errors?
  9. sweet - that did the trick. Okay - now i've gotta wait on my new sata card, and then when I get everything else confirmed i'll post here before i restart the array, just to make sure i've fixed all the problems. One other quick question - would it be advisable at some point to move the data on disk 3 (which is almost full) to disk 1 now that it is 8TB? If so, how would i go about doing that? I'd want to keep disk 3 in the array but just free up space on it.
  10. perfect - thanks.... so just: "rsync -av /mnt/disks/ST2000VN000-1H3164_W1H2RQHS/ /mnt/disk1/"
  11. here ya go.... (the live-run version is sketched below, after this list)

         root@Buddha:~# rsync -ah --stats --max-delete=100 --delete-before --force -n /mnt/disks/ST2000VN000-1H3164_W1H2RQHS/ /mnt/disk1/

         Number of files: 55,462 (reg: 51,455, dir: 4,007)
         Number of created files: 123 (reg: 97, dir: 26)
         Number of deleted files: 20 (reg: 9, dir: 11)
         Number of regular files transferred: 98
         Total file size: 1.96T bytes
         Total transferred file size: 94.41G bytes
         Literal data: 0 bytes
         Matched data: 0 bytes
         File list size: 1.57M
         File list generation time: 0.091 seconds
         File list transfer time: 0.000 seconds
         Total bytes sent: 1.62M
         Total bytes received: 1.90K

         sent 1.62M bytes  received 1.90K bytes  1.08M bytes/sec
         total size is 1.96T  speedup is 1,210,386.29 (DRY RUN)
  12. ha - that would make a difference.... here is the end of the list.....
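
On the delugevpn failure in post 2: the [crit] line says the container exits because there is no .ovpn file in /config/openvpn/. A minimal sketch of the usual fix, assuming /config is mapped to /mnt/user/appdata/binhex-delugevpn and the container is named binhex-delugevpn (both common defaults, but check your own template); the provider.ovpn filename is hypothetical:

    # Assumed host-side path for the container's /config volume mapping.
    APPDATA=/mnt/user/appdata/binhex-delugevpn

    # Copy the provider's OpenVPN config (hypothetical filename), plus any
    # certs/keys it references, into the folder the startup script checks.
    cp /boot/provider.ovpn "$APPDATA/openvpn/"

    # Restart the container so it finds the config and starts the VPN.
    docker restart binhex-delugevpn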
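
On the xfs_repair output in post 4: a quick way to confirm the filesystem is now clean is a second pass in no-modify mode. A minimal sketch, assuming the array is started in maintenance mode so disk 5 is still exposed as /dev/md5:

    # -n = no-modify mode: scan and report problems without changing anything.
    # A clean run (no corrections listed) confirms the earlier repair held.
    xfs_repair -n /dev/md5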
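
On the UDMA CRC errors in post 8: attribute 199 counts link-level (cable/controller) errors, and its raw value never resets, so the question is whether it is still climbing after the hardware swap. A minimal sketch of one way to check, assuming disk 1 shows up as /dev/sdb (a hypothetical device name; check the Unraid device list):

    # Print SMART attributes and isolate the UDMA CRC counter (ID 199).
    # Re-run after a day of I/O: a flat raw value suggests the errors are
    # historical; a rising one means the link is still flaky.
    smartctl -A /dev/sdb | grep -i udma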
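
And on the dry run in post 11: dropping the -n flag turns the same command into the real copy. A sketch of the live invocation, identical to the dry run above except for -n:

    # Same options as the dry run, minus -n: files are actually copied and
    # up to 100 stale files on the destination are deleted first.
    rsync -ah --stats --max-delete=100 --delete-before --force \
        /mnt/disks/ST2000VN000-1H3164_W1H2RQHS/ /mnt/disk1/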