darkwolf

Members
  • Posts

    19
  • Joined

  • Last visited

About darkwolf

  • Birthday 02/07/1975

  • Gender
    Male
  • Location
    SE US
  • Personal Text
    Ent. Architect, Oracle DBA, Unix System Admin

darkwolf's Achievements

Noob (1/14)

Reputation: 1

  1. I am also having this problem with modpacks like ATM8, Stoneblock3, and Seaopolis2. They run fine on my custom dockers and on my local machine with Java 17, but the current version installed on this docker is Java 19, which crash-logs every modpack I have tried (all Forge servers, as far as I remember) based on Java Minecraft 1.18 and 1.19. (See the Java check sketch after this list.)
  2. That may be fine for the comment above, but it is not viable for long-term use. The best option would still be a choice to bring up the array without spinning up all the virt and docker stuff. Right now, if I want to do file maintenance, I have to disable the Docker service and the VM service, do whatever I need to, shut down, re-enable the services, and start again. It's not a big thing, just a nice-to-have. But that does not fix the second issue: when you have over 30 dockers starting, manually starting them is not something you can do on the regular.
  3. This happened after trying to disable attribute SMART monitoring on flash storage that does not report errors correctly. It was fixed by going in and doing it again. It seems to happen randomly when changing and applying the setting on one disk, then using the < > arrows to change disks instead of hitting Done.
  4. I just took the faulty drives out of the array, rebuilt the new config, and copied over the data. Everything looked good. Once I get my new drives in for double parity and everything checks out, I will preclear the 'faulty' drives and see how that goes. If I have issues with them I will make a new post. Thanks for the awesome job you all do! Very much appreciate everyone who helps out here in the community! Much love!
  5. The drives with read errors are on separate SAS cables and different power segments, so I am thinking it is a mix of old drives and funkiness from my old controller. My plan at the moment is to remove the drives with the read errors from the drive pool, make a new config with the remaining drives, then mount the other drives with mount -o ro,norecovery and move the data over to the array (see the mount sketch after this list). I have some spare drives to throw in to make more space, so that should work out space-wise. I know I may have some file corruption; anything of importance, like I said, is backed up already, so I may just restore the shares that are important and worry about the rest on a case-by-case basis. I am still open to feedback, though, as that process will take a while and I won't start on it for a few days.
  6. So I did it, because most of the data is either backed up somewhere or media I can re-rip from disc. It looks like I have a few disk errors. Suggestions?
  7. (Oh, and then of course adding my parity drive back in, now that it is looking fine, and running parity checks.)
  8. So I have had some cabling and controller issues and kept losing parity, so I disabled parity (I know, I know) until I could get my replacement card in. I re-cabled with new cables, had an issue with md1 (and the physical drive as well), rechecked the power connections, same result, then swapped in a whole new power connection for four drives and md1 came back fine. But somewhere in the ups and downs the md4 device began having I/O errors, even though an xfs check of the underlying device (currently /dev/sdg1) looks fine (run with -n so it wouldn't change anything). I did the 'shrink array' method of rebuilding the array with a new config, keeping data, to see if that would 'fix' the error. Still no. Since I do not have parity right now, I am guessing it is safe to xfs_repair the non-md device (i.e. /dev/sdg1), recover what I can, then just make a new config, keep files, and go from there? (See the repair sketch after this list.) Output from xfs_repair:

     root@media:~# xfs_repair -vn /dev/sdg1
     Phase 1 - find and verify superblock...
             - block cache size set to 6157048 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 1233498 tail block 1233498
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 2
             - agno = 1
             - agno = 3
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

             XFS_REPAIR Summary    Thu Apr 8 01:18:04 2021

     Phase           Start           End             Duration
     Phase 1:        04/08 01:17:39  04/08 01:17:39
     Phase 2:        04/08 01:17:39  04/08 01:17:40  1 second
     Phase 3:        04/08 01:17:40  04/08 01:17:56  16 seconds
     Phase 4:        04/08 01:17:56  04/08 01:17:57  1 second
     Phase 5:        Skipped
     Phase 6:        04/08 01:17:57  04/08 01:18:04  7 seconds
     Phase 7:        04/08 01:18:04  04/08 01:18:04

     Total run time: 25 seconds

     root@media:~# xfs_repair -vn /dev/md4
     Phase 1 - find and verify superblock...
     superblock read failed, offset 0, size 524288, ag 0, rval -1

     fatal error -- Input/output error

     media-diagnostics-20210408-0054.zip
  9. So re: A26, is there any way to fix SickChill? It does not give an option to ignore locals. I set up Sonarr again, and I still don't like it.
  10. Before the update I was using Privoxy for various apps including NZBHydra, Jackett, Radarr, and now the problem child, SickChill. All the others are working fine since I had already set them to bypass locals and bypass my grabbers, but now SickChill cannot send NZBs to SABnzbd. Is there any way to tell it to bypass locals (I have not found one), besides moving everything over to Sonarr? (See the proxy sketch after this list.)
  11. I'd really love an option to do something like 'Start Array in Maintenance Mode', but with the added benefit of mounting drives and starting services, without starting the dockers or VMs that are usually set to autostart. The reasons for this are to 1) do file maintenance without services interfering, and/or 2) access the Docker or VM tab to change services before they start up. Say I wanted to move/rename a directory that gets mounted/used by a docker/VM; that's very hard to do right now, though doable. It should be a semi-easy addition? Thanks for the awesome software, folks!
  12. DOH!! I thought I had tried that before coming here (I blame it on the meds!!!). Thank you, it worked like a charm. Continue the good fight!!!
  13. I tried entering the IP address; it still does not find the shares. I would LOVE to add the shares manually, but do not see how from "ADD REMOTE SMB/NFS SHARE". It finds my server just fine in "SEARCH FOR SERVERS", it just cannot find any shares after trying to authenticate. Can I add it manually via a config file?
  14. Hello, I tried searching up and down in the various UD threads and can't find an answer for the new interface. I have been having problems mounting an SMBv1 share on an ancient NAS device (I'll probably move it over to Unraid, but I have had other things to do :P). I had it mounted before last year and was backing it up with Duplicati, though it failed later last year and I never realized it. Yes, I still have Force SMBv1 enabled in settings, and I can mount it manually, so I know the password works. I deleted the mounts when I saw there was a change in UD to encrypt passwords (applause there), but I cannot seem to get a listing of shares. I do the scan for 'Win 10' shares, enter the username and hit Next, enter the password and hit Next, leave Domain blank and hit Next, hit Scan for Shares, and it immediately comes back with nothing. I can mount it fine from the command line, but I want to use UD since you guys do an awesome job with it (and having supported a few free apps in the past, I feel your pain! You need more love!!!). The command line:

      mount -t cifs -o rw,nounix,iocharset=utf8,_netdev,file_mode=0777,dir_mode=0777,sec=ntlm,vers=1.0,username='****',password='****' '//STORAGE1/C' '/mnt/disks/STORAGE1_C/'

      I don't get an error in dmesg or syslog when trying to use UD, but I was hoping for a quick fix like 'Oh stupid, click on this first'. If not, I can run diagnostics and dump them if needed. Thank you again for the wonderful community here and the awesome support. Sorry if I missed the easy fix in past posts, but there are soooo many posts to look through (I only made it through 20 or so pages from the past few months on a couple of different threads). EDIT: Oh! And yes, NetBIOS is also enabled.
  15. For the last week or so I have been getting drops in SAB. Looking at the logs, I see I am erroring out on the API curl check for my IP. Is there anything I can do about this? I am using PIA. It worked like a charm for a long while; I'm not sure when it started, but I began noticing it a week or two ago. Stuff will eventually download, mostly, it just takes forever. I changed to a CA node thinking maybe a different tunnel would be better, and that seems to have fixed it, though I was getting faster speeds on US servers. Ahhh well. Just an FYI for anyone seeing spikes and slowdowns in their servers on PIA. (See the curl sketch after this list.) BTW, great job guys! Love it!
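
Re post 1, a minimal way to confirm which Java runtime the container is actually using; the container name "minecraft" is a placeholder, not taken from the post:

    # Print the Java version inside the running container
    docker exec minecraft java -version
    # Forge packs for Minecraft 1.18/1.19 generally target Java 17, so if this
    # prints 19, rolling the image back to a Java 17 tag is one workaround.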
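
Re post 5, a minimal sketch of the read-only salvage mount described there; the device /dev/sdX1 and the destination path are placeholders:

    # Mount the failing disk read-only, skipping XFS log replay
    mkdir -p /mnt/salvage
    mount -t xfs -o ro,norecovery /dev/sdX1 /mnt/salvage
    # Copy what is readable onto the array, then unmount
    rsync -a /mnt/salvage/ /mnt/user/restored/
    umount /mnt/salvage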
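
Re post 8, a sketch of the actual repair pass on the raw partition, assuming the array is stopped and the filesystem is unmounted (and, as noted in the post, no parity is assigned, so there is no parity to invalidate):

    # Real repair run (no -n this time); -v for verbose output
    xfs_repair -v /dev/sdg1
    # If xfs_repair refuses to run because the log is dirty, mount and cleanly
    # unmount the filesystem first; xfs_repair -L (zero the log) is the last
    # resort, since it can discard recent metadata changes.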
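
Re post 10, an untested idea rather than a confirmed fix: if SickChill's HTTP stack honors the standard proxy environment variables, the container environment could exempt local hosts from Privoxy. The host names and port below are placeholders:

    # Route outbound HTTP through Privoxy...
    HTTP_PROXY=http://privoxy-host:8118
    HTTPS_PROXY=http://privoxy-host:8118
    # ...but let traffic to these hosts (e.g. SABnzbd on the LAN) bypass it
    NO_PROXY=localhost,127.0.0.1,sabnzbd-host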
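
Re post 15, a quick manual test of the tunnel from inside the VPN container; the container name and the IP-echo service are placeholders for whatever the image actually uses:

    # Ask an external service for our apparent IP, with a short timeout
    docker exec sabnzbdvpn curl -s --max-time 10 https://ifconfig.co
    # If this times out while downloads still (slowly) work, the IP-check
    # endpoint rather than the tunnel itself may be what is failing.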