
Everything posted by LoyalScotsman

  1. Marked your USB answer as the solution; once I changed them over to USB 3.0 it was back to normal speeds.
  2. Yeah, I tried that, no joy, so I stopped the rebuild and juggled the USB caddy around until I get a new RAID card. It turns out the slow speed was because I used a USB 2.0 port instead of the 3.0 port; it's now back up to 100 MB/s - 150 MB/s.
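For what it's worth, the USB 2.0 vs 3.0 mix-up above is easy to spot from the command line: `lsusb -t` prints the negotiated link speed per port (480M for USB 2.0, 5000M/10000M for USB 3.x). A minimal sketch, assuming `lsusb` from usbutils is available; the helper just filters mass-storage interfaces stuck at USB 2.0 speed:

```shell
#!/bin/bash
# Sketch: flag USB drive enclosures that only negotiated a USB 2.0 link.
# lsusb -t reports the per-port speed: 480M means USB 2.0, 5000M/10000M USB 3.x.
# (Assumes usbutils is installed; the filter reads `lsusb -t` output on stdin.)
slow_usb_storage() {
  grep 'usb-storage' | grep '480M'   # mass-storage devices stuck at 480 Mbit/s
}
# Usage: lsusb -t | slow_usb_storage
```

If a caddy that should be USB 3.0 shows up here, it is plugged into (or fell back to) a USB 2.0 port.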
  3. Is there a way to get Docker running again without a reboot?
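If the Docker service itself has stopped, Unraid (being Slackware-based) ships an init script that can usually restart it without touching the array. A minimal sketch: the `/etc/rc.d/rc.docker` path is an assumption based on typical Unraid installs, and toggling Docker off and on under Settings > Docker does the equivalent from the GUI:

```shell
#!/bin/bash
# Sketch: restart the Docker service without rebooting the whole server.
# The /etc/rc.d/rc.docker path is an assumption (Unraid is Slackware-based);
# toggling Docker in Settings > Docker does the same thing from the GUI.
restart_docker() {
  local rc="${1:-/etc/rc.d/rc.docker}"   # init script path (parameter for testing)
  "$rc" stop
  sleep 2                                # give containers a moment to shut down
  "$rc" start
}
```

Note that if the docker image itself is corrupted this will still fail to start; the init script only restarts the service.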
  4. I'd never had the issue beforehand, but I was thinking the new USB HDD holder I got might be causing it to run very slowly.
  5. Me again. I have acquired two 8 TB disks and was putting them both into parity so I can upgrade down the line, but my estimated rebuild speed shows as 19.9 MB/sec (see screenshot), while my parity-check history is normally around 100 MB/s (see screenshot).

     Disk specs:
     New parity 1 disk - ST8000NM012A - 7200 RPM
     Parity 2 - MG08ADA400NY - 7200 RPM
     Disk 1 - MG08ADA400NY - 7200 RPM
     Disk 2 - WD40EFRX - 5400 RPM
     Disk 3 - MG08ADA400NY - 7200 RPM
     Old parity disk going into the array - WD40EZRX - 5400 RPM
     Parity 2 will be another ST8000NM012A - 7200 RPM disk, not yet connected

     So I know it isn't the 5400 RPM drives causing this slowdown, since they haven't been an issue in the past. There is one error shown, but I assume that is due to the new disk's temperature running hot.

     Logs/diagnostics from before I stopped Docker: vault101-diagnostics-20240312-1033.zip vault101-syslog-20240312-1032.zip

     I stopped Docker because there was some activity from Plex, but it made no difference to the speeds. I tried to start Docker again but it failed to start. I don't want to reboot because of the rebuild in progress (unless someone advises a reboot), but can someone assist here please?

     EDIT: logs from after I tried to start Docker: vault101-syslog-20240312-1055.zip vault101-diagnostics-20240312-1055.zip
  6. Thank you, I shall do this, but another issue has occurred, so I will post it and await a response.
  7. So I have done the above and the errors have cleared; hopefully it also stops the pop-up about COW that occurs every hour when logging in. My appdata share is set to Auto; the system and domains shares are set to No. How do I change this? It appears greyed out. I assume this COW option is only for shares stored directly on the cache?
  8. The script appears to still be picking up errors. Is there a command to clear them?

     Script Starting Mar 08, 2024 19:47.01
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/checkcachedrives/log.txt
     [/dev/sde1].write_io_errs 0
     [/dev/sde1].read_io_errs 0
     [/dev/sde1].flush_io_errs 0
     [/dev/sde1].corruption_errs 0
     [/dev/sde1].generation_errs 0
     [/dev/sdk1].write_io_errs 2668196
     [/dev/sdk1].read_io_errs 3936
     [/dev/sdk1].flush_io_errs 0
     [/dev/sdk1].corruption_errs 1779689
     [/dev/sdk1].generation_errs 2592
     Script Finished Mar 08, 2024 19:47.03
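On the "command to clear this" question: the counters that `btrfs device stats` prints are cumulative and persist until explicitly reset, so old errors keep appearing in the hourly script output even after the fault is fixed. A minimal sketch, assuming the pool is mounted at the usual Unraid `/mnt/cache` path; the helper just filters the stats output down to non-zero counters:

```shell
#!/bin/bash
# The btrfs device stats counters are cumulative; the -z flag prints them and
# then resets them to zero. Run this once the underlying fault (cable/port)
# is fixed, so the hourly check only reports new errors:
#   btrfs device stats -z /mnt/cache   # /mnt/cache is the usual Unraid pool mount
# Helper: filter stats output down to counters that are not zero.
nonzero_errs() {
  awk '$2 != 0'   # field 2 is the error count on each "[...].x_errs N" line
}
# Usage: btrfs device stats /mnt/cache | nonzero_errs
```

If the counters climb again after being zeroed, the errors are new, not leftovers.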
  9. There are no NOCOW shares. I have replaced the cable for /dev/sdk and will monitor it; just thought I'd post, but so far it is looking good, so as you say, that disk must have dropped offline.
  10. I was about to add a new 8 TB disk when I noticed an error pop-up on login saying "cache error". It's not the first time I've had this, so it's annoying, but I went to look at the disks and one showed as failed. Unraid wasn't responding to any changes I was trying to make, so I ended up forcing a reboot, and now both cache disks run SMART scans and report no issues.

      Logs before the reboot: vault101-diagnostics-20240308-1515.zip vault101-syslog-20240308-1514.zip
      Logs after the reboot: vault101-diagnostics-20240308-1534.zip vault101-syslog-20240308-1534.zip

      I also checked the disk logs and see the below: cache disk sde, cache 2 disk sdk. I have tried a different SATA cable on the disks to see if it helps, but it's too early to tell, especially with the current errors. A parity check is in progress because I rebooted to get it to respond.

      This is the script result for the cache check:

      Script Starting Mar 08, 2024 15:47.01
      Full logs for this script are available at /tmp/user.scripts/tmpScripts/checkcachedrives/log.txt
      [/dev/sde1].write_io_errs 0
      [/dev/sde1].read_io_errs 0
      [/dev/sde1].flush_io_errs 0
      [/dev/sde1].corruption_errs 0
      [/dev/sde1].generation_errs 0
      [/dev/sdk1].write_io_errs 2668196
      [/dev/sdk1].read_io_errs 3936
      [/dev/sdk1].flush_io_errs 0
      [/dev/sdk1].corruption_errs 57
      [/dev/sdk1].generation_errs 0
      Script Finished Mar 08, 2024 15:47.03

      I ran a scrub and it gave me the following:

      UUID: 5f5d37ad-1efa-46da-b974-d876dfdb375a
      Scrub started: Fri Mar 8 16:05:10 2024
      Status: finished
      Duration: 0:43:16
      Total to scrub: 142.90GiB
      Rate: 56.34MiB/s
      Error summary: verify=2592 csum=1678144
      Corrected: 1680736
      Uncorrectable: 0
      Unverified: 0

      Is there any indication as to why my cache has corrupted? I am happy to move everything off my cache again to see how that goes, because it leaves corrupted stuff behind, but I'm looking for input before I do this in case there is a suggestion. Any help would be great, thank you.

      EDIT: after the scrub it appears the errors have stopped. I will update the post later on, but would still like input please.
  11. So the array finished its parity check, and after a reboot everything came back. This is the first time I've seen this; is it a bug with 6.12.2?
  12. Disk 1 dropped offline and came back as a disabled disk, so I followed the steps on the forum: powered down the server, replaced the SATA cable with a new one, then did the steps to clear the disabled error on disk 1. But upon bringing the array back up and starting the rebuild of disk 1, I noticed the below:
      1. The Docker service failed to start, and within Settings it shows
      2. I only have 3 shares where I should have 8 / 9 shares. The 3 shares present appear to have nothing in them, and viewed over the network they appear empty, but checking the array the data is still present.

      The rebuild of disk 1 is in progress, so I cannot reboot at this moment; a reboot might fix it, but I've opened this help request in case it doesn't and am looking for input.

      I also noticed the following is spamming my system logs. Diagnostics attached.

      Also, when going into Backup/Restore Appdata and choosing the restore option I get this, but I assume it's because Docker isn't running.

      I just hope that after the rebuild is done and a reboot it comes back to life, because otherwise I haven't got a clue how to recover this. This server also ran for 3 months without an issue; the disk 1 problem happened on the 5th but was only noticed today.

      vault101-diagnostics-20240108-1713.zip
  13. I have not had time to look further into it, so no, it's still an issue with Virgin. I run through NordVPN, but it does not want to play ball with Virgin. How are you getting on with it?
  14. Hello, just thought I would give an update: uptime is now 28 days, so it seems more stable with the macvlan change. I will continue to monitor it, but so far so good.
  15. OK, I will implement the change from macvlan to ipvlan to test this, fingers crossed it fixes it. I have also checked that link and ran the command to clear the errors, which didn't help; I had to power down and reset the cache disks, so I'm going to look into the cable on the cache disks just in case.
  16. Hello, sorry to be a pain, but I just had the crash where it completely locks everything out: it won't ping, I can't access the URL, and there's no display with a monitor connected. I checked my router and it showed the static IP as not connected even though the NIC lights are flashing on the back of the server, so it looks like a complete crash. I have forced a shutdown with the power button (now doing a parity check) and attached the logs and diagnostics from this fresh boot-up; hopefully you can find something on why this keeps happening.

      vault101-diagnostics-20231016-1128.zip vault101-syslog-20231016-1028.zip

      My cache disks also have errors on them again, see below:

      Script Starting Oct 16, 2023 11:50.35
      Full logs for this script are available at /tmp/user.scripts/tmpScripts/checkcachedrives/log.txt
      [/dev/sdf1].write_io_errs 0
      [/dev/sdf1].read_io_errs 0
      [/dev/sdf1].flush_io_errs 0
      [/dev/sdf1].corruption_errs 0
      [/dev/sdf1].generation_errs 0
      [/dev/sdh1].write_io_errs 16647
      [/dev/sdh1].read_io_errs 204
      [/dev/sdh1].flush_io_errs 0
      [/dev/sdh1].corruption_errs 0
      [/dev/sdh1].generation_errs 0
      Script Finished Oct 16, 2023 11:50.37

      I have just started a scrub with "repair corrupted blocks" to see if it does anything.
  17. Ideal, this has fixed my cache, you're a life saver. Next question, regarding changing the Docker network from macvlan to ipvlan (which you are clearly aware of): the reason I ask is that I have never adjusted this and it has always been macvlan. What implications will the change have? Will any reconfiguration be needed for anything within Docker, or is it a switch-and-done?

      Also, I noticed I got the pop-up for Unraid v6.12.4. Is it worth upgrading? It got stuck on unmounting again and I had to run the losetup and umount commands to actually get the array to stop, but going by the above forum thread that should have been fixed in the version of Unraid I have installed.
  18. Please see fresh logs after the reboot. Hope you can help me. vault101-syslog-20231015-1939.zip vault101-diagnostics-20231015-2040.zip
  19. I did this but it wouldn't fix them, so I moved everything off, formatted the cache, then copied it back over.

      Not sure if it's the same issue, but the issues continue; see below. Plex started giving errors when trying to watch something, so I restarted the Plex docker, after which it didn't work at all. Then I shut Plex down and started it up again, no change, my server was down. I turned Docker off then back on, and it said Docker failed to start. I then stopped the array for a reboot and it got stuck at unmounting the disks, so I found this forum thread and ran the commands on it, but no luck (see screenshot). So I rebooted to see if this would bring Docker back up, but now my cache disks are showing as an incorrect format, and some dockers are working but not all.

      I have attached below the diagnostics and syslogs from before I rebooted, and will create another comment with fresh ones after the reboot. Please help me with this one; I have not yet formatted the cache disk and am awaiting your response.

      vault101-syslog-20231015-1913.zip vault101-diagnostics-20231015-2013.zip
  20. Thank you, this is probably why it didn't move the last corrupted files. Mover has just completed, and by the looks of it the only files left are some appdata files for Dropbox. The last time it corrupted it was my Plex DB, which I managed to save, so I followed your script guide and now run a backup of it daily.
  21. Also, any idea how to work out which files are corrupted?
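On finding which files are corrupted: when a scrub (or a normal read) hits a bad checksum, the kernel logs a warning to the syslog that usually includes the affected file path in a `(path: ...)` field. The exact message wording varies by kernel version, so treat this as a sketch rather than a guaranteed pattern; it just pulls the path field out of the log:

```shell
#!/bin/bash
# Sketch: list files the kernel flagged with btrfs checksum errors.
# The "(path: ...)" field in BTRFS warning lines identifies the file; the
# exact message format varies by kernel version (this pattern is an assumption).
corrupt_paths() {
  grep -o 'path: [^)]*' "$1" | sort -u   # $1 = syslog file to scan
}
# Usage: corrupt_paths /var/log/syslog
```

Running a scrub first forces every block to be read and checksummed, so the log ends up covering the whole pool rather than just recently accessed files.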
  22. Hello, not a problem, I will do that in a minute and then update this upon the next crash. That might not be for days, but I will quote you so you get a notification.
  23. Ever since I upgraded to 6.12 onwards I have been experiencing random crashes where everything is completely inaccessible: when it crashes there is no display on a monitor and no access via URL or PuTTY. It will ping, but that's about all it will do, which results in me having to force a power-down. The last time I did this the cache drive got corrupted, but I was able to run some commands and find out which folder was the problem; for some reason this time I am not able to do so. Can someone please review the diagnostics and syslogs to provide input on what has caused this?

      My current Unraid version is 6.12.2. I can try upgrading to the next version, but I need to find out where the corrupted file on my cache is first, so if anyone is able to work this out or provide a command to locate the corrupted files, that would be great.

      This is the cache-check script I created and the info it provided. FYI, the below is the first run of the script after boot-up; the scan runs hourly, so I didn't want to upload the rest since the output is the same each time.

      Script Starting Oct 11, 2023 22:47.01
      Full logs for this script are available at /tmp/user.scripts/tmpScripts/checkcachedrives/log.txt
      [/dev/sde1].write_io_errs 0
      [/dev/sde1].read_io_errs 0
      [/dev/sde1].flush_io_errs 0
      [/dev/sde1].corruption_errs 0
      [/dev/sde1].generation_errs 0
      [/dev/sdg1].write_io_errs 0
      [/dev/sdg1].read_io_errs 0
      [/dev/sdg1].flush_io_errs 0
      [/dev/sdg1].corruption_errs 16
      [/dev/sdg1].generation_errs 0
      Script Finished Oct 11, 2023 22:47.04

      I have just disabled Docker so I can run mover to remove everything from my cache drives, because last time it left the corrupted files on the cache. Any assistance would be great; the crash is very random, sometimes it can run for days, sometimes weeks, and then a month or two before a crash.

      vault101-diagnostics-20231013-0905.zip vault101-syslog-20231013-0804.zip
  24. No problem. Starting to copy stuff from the pool over to the array just now, then I'll re-format the pool and copy back over. Hopefully it all goes successfully, because there are a lot more errors in the logs this morning.
  25. Sorry for the delayed response; I went down the good old Linux command-line route, cd'd into the directory, and removed it that way.