LoyalScotsman

Members
  • Posts: 65
  • Joined
  • Last visited


LoyalScotsman's Achievements

Rookie (2/14)

Reputation: 3

  1. Marked your USB post as the solution, but once I changed them over to USB 3.0 it was back to normal speeds.
  2. Yeah, I tried that with no joy, so I stopped the rebuild and juggled the USB caddy around until I get a new RAID card. It turns out the slow speed was because I had used a USB 2.0 port instead of the 3.0 port; I'm now back up to 100–150 MB/s.
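For reference, the speeds in the post line up with USB bus limits: USB 2.0 signals at 480 Mbit/s, USB 3.0 at 5 Gbit/s, so on the 2.0 port the bus, not the disk, is the bottleneck. A rough sketch of the ceilings (the ~80% protocol-efficiency figure is an assumption, not a measured value):

```python
# Rough upper bounds on sustained USB throughput, assuming ~80% of the
# raw signalling rate survives protocol overhead (an approximation).
def usb_throughput_mb_s(raw_mbit_per_s: float, efficiency: float = 0.8) -> float:
    return raw_mbit_per_s * efficiency / 8  # Mbit/s -> MB/s

usb2 = usb_throughput_mb_s(480)   # USB 2.0: ~48 MB/s ceiling
usb3 = usb_throughput_mb_s(5000)  # USB 3.0: ~500 MB/s ceiling

# A 7200 RPM disk sustaining 100-150 MB/s is throttled on USB 2.0
# but runs at full disk speed on USB 3.0.
print(f"USB 2.0 ceiling: {usb2:.0f} MB/s, USB 3.0 ceiling: {usb3:.0f} MB/s")
```

So the observed 100–150 MB/s on the 3.0 port is simply the drive's own sustained rate, comfortably under the bus ceiling.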
  3. Is there a way to get Docker running again without a reboot?
  4. Never had the issue beforehand, but I was thinking the new USB HDD holder I got might be causing it to run very slowly.
  5. Me again. I have acquired two 8 TB disks and was putting them both into parity so I can upgrade down the line, but my estimated rebuild speed shows as 19.9 MB/s (see screenshot); my parity check history is normally around 100 MB/s (see screenshot).

     Disk specs:
     • New parity 1 disk - ST8000NM012A - 7200 RPM
     • Parity 2 - MG08ADA400NY - 7200 RPM
     • Disk 1 - MG08ADA400NY - 7200 RPM
     • Disk 2 - WD40EFRX - 5400 RPM
     • Disk 3 - MG08ADA400NY - 7200 RPM
     • Old parity disk going into the array - WD40EZRX - 5400 RPM
     • Parity 2 will be another ST8000NM012A - 7200 RPM (disk not yet connected)

     So I know it isn't the 5400 RPM disks causing this slowdown, since they haven't been an issue in the past. There is one error shown, but I assume that is due to the new disk's temperature running hot.

     Logs/diagnostics before I stopped Docker: vault101-diagnostics-20240312-1033.zip vault101-syslog-20240312-1032.zip

     I stopped Docker because there was some activity from Plex, but it made no difference to the speeds. I then tried to start Docker again, but it failed to start. I don't want to reboot with the rebuild in progress unless someone advises it, but can someone assist here please?

     EDIT: logs after I tried to start Docker: vault101-syslog-20240312-1055.zip vault101-diagnostics-20240312-1055.zip
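To put the slowdown in perspective, a back-of-the-envelope estimate of how long building 8 TB of parity takes at each speed (a sketch only; real rebuild speed varies across the platters):

```python
# Estimated time to build 8 TB of parity at a given average speed.
def rebuild_hours(capacity_tb: float, speed_mb_s: float) -> float:
    total_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal, as drives are sold)
    return total_mb / speed_mb_s / 3600

slow = rebuild_hours(8, 19.9)   # ~111.7 hours (~4.7 days)
normal = rebuild_hours(8, 100)  # ~22.2 hours
print(f"{slow:.1f} h at 19.9 MB/s vs {normal:.1f} h at 100 MB/s")
```

At the reported 19.9 MB/s the rebuild would take roughly five times longer than at the usual ~100 MB/s, which is why the estimate looked alarming.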
  6. Thank you, I shall do this, but another issue has occurred, so I will post that and await a response.
  7. So I have done the above and the errors cleared; hopefully it also stops the pop-up about COW that occurs every hour when logging in. My appdata share is set to Auto, and the system and domains shares are set to No. How do I change this? The option appears greyed out. I assume this COW option only applies to shares stored directly on the cache?
  8. The script still appears to be picking up errors. Is there a command to clear this?

     Script Starting Mar 08, 2024 19:47.01
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/checkcachedrives/log.txt
     [/dev/sde1].write_io_errs 0
     [/dev/sde1].read_io_errs 0
     [/dev/sde1].flush_io_errs 0
     [/dev/sde1].corruption_errs 0
     [/dev/sde1].generation_errs 0
     [/dev/sdk1].write_io_errs 2668196
     [/dev/sdk1].read_io_errs 3936
     [/dev/sdk1].flush_io_errs 0
     [/dev/sdk1].corruption_errs 1779689
     [/dev/sdk1].generation_errs 2592
     Script Finished Mar 08, 2024 19:47.03
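Those counters come from `btrfs device stats`, which reports lifetime totals: they persist across reboots until explicitly reset, which is why old errors keep reappearing in the script output (`btrfs device stats -z <device>` prints and then zeroes them). A minimal sketch of parsing that output to flag only devices with nonzero counters, using sample lines from the output above:

```python
import re

def nonzero_stats(output: str) -> dict:
    """Parse `btrfs device stats` output; return {device: {counter: value}}
    keeping only the nonzero counters."""
    errs = {}
    for line in output.splitlines():
        m = re.match(r"\[(\S+)\]\.(\w+)\s+(\d+)", line.strip())
        if m:
            dev, counter, value = m.group(1), m.group(2), int(m.group(3))
            if value:
                errs.setdefault(dev, {})[counter] = value
    return errs

sample = """\
[/dev/sde1].write_io_errs 0
[/dev/sdk1].write_io_errs 2668196
[/dev/sdk1].corruption_errs 1779689
"""
print(nonzero_stats(sample))
# {'/dev/sdk1': {'write_io_errs': 2668196, 'corruption_errs': 1779689}}
```

A check script built this way could alert only when a counter is nonzero, rather than printing every counter each run.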
  9. There are no NOCOW shares. I have replaced the cable for /dev/sdk and will monitor it; I just thought I'd post, but so far it is looking good, so as you say that disk must have dropped offline.
  10. I was about to add a new 8 TB disk when I noticed an error pop-up on login saying "cache error". It's not the first time I've had this, so it's annoying. I went to look at the disks and one showed as failed, but Unraid wasn't responding to any changes I tried to make, so I ended up forcing a reboot; now both cache disks run SMART scans and report no issues.

      Logs before the reboot: vault101-diagnostics-20240308-1515.zip vault101-syslog-20240308-1514.zip
      Logs after the reboot: vault101-diagnostics-20240308-1534.zip vault101-syslog-20240308-1534.zip

      I also checked the disk logs and see the below (cache disk sde, cache 2 disk sdk). I have also tried a different SATA cable on the disks to see if it helps, but it's too early to tell, especially with the current errors. A parity check is in progress because I rebooted to get the server to respond.

      This is the script result for the cache check:

      Script Starting Mar 08, 2024 15:47.01
      Full logs for this script are available at /tmp/user.scripts/tmpScripts/checkcachedrives/log.txt
      [/dev/sde1].write_io_errs 0
      [/dev/sde1].read_io_errs 0
      [/dev/sde1].flush_io_errs 0
      [/dev/sde1].corruption_errs 0
      [/dev/sde1].generation_errs 0
      [/dev/sdk1].write_io_errs 2668196
      [/dev/sdk1].read_io_errs 3936
      [/dev/sdk1].flush_io_errs 0
      [/dev/sdk1].corruption_errs 57
      [/dev/sdk1].generation_errs 0
      Script Finished Mar 08, 2024 15:47.03

      So I ran a scrub and it gave me the following:

      UUID: 5f5d37ad-1efa-46da-b974-d876dfdb375a
      Scrub started: Fri Mar 8 16:05:10 2024
      Status: finished
      Duration: 0:43:16
      Total to scrub: 142.90GiB
      Rate: 56.34MiB/s
      Error summary: verify=2592 csum=1678144
      Corrected: 1680736
      Uncorrectable: 0
      Unverified: 0

      Is there any indication as to why my cache has corrupted? I am happy to move everything off my cache again to see how that goes, since corruption leaves damaged files behind, but I'm looking for input before I do this in case there is a better suggestion. Any help would be great, thank you.

      EDIT: after the scrub it appears the errors have stopped. I will update the post later on, but would still like input please.
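One sanity check on the scrub summary above: the "Corrected" figure is exactly the sum of the verify and checksum errors, and the reported rate matches the scrubbed size divided by the duration. Verifying the arithmetic from the posted output:

```python
# Cross-check the btrfs scrub summary from the post above.
verify_errs = 2592
csum_errs = 1678144
corrected = verify_errs + csum_errs
print(corrected)  # 1680736, matching "Corrected: 1680736"

# Rate check: 142.90 GiB scrubbed in 0:43:16.
total_mib = 142.90 * 1024          # GiB -> MiB
duration_s = 43 * 60 + 16          # 2596 seconds
rate = total_mib / duration_s
print(f"{rate:.2f} MiB/s")  # ~56.37 MiB/s, in line with the reported 56.34 MiB/s
```

With Uncorrectable at 0, every detected error was repaired from the other copy, which fits the errors stopping after the scrub.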
  11. So the array finished its parity check, and after a reboot everything came back. This is the first time I've seen this; is it a bug with 6.12.2?
  12. Disk 1 dropped offline and came back as a disabled disk, so I followed the steps on the forum: powered down the server, replaced the SATA cable with a new one, then did the steps to clear the disabled error on disk 1. Upon bringing the array back up and starting the rebuild of disk 1, I noticed the below:
      1. The Docker service failed to start, and within Settings it shows
      2. I only have 3 shares where I should have 8 or 9. The 3 shares present appear to have nothing in them and look empty when viewed over the network, but checking the array the data is still present.

      The rebuild of disk 1 is in progress, so I cannot reboot at this moment; a reboot might fix it, but I opened this help request in case it doesn't and am looking for input.

      I also noticed the following is spamming my system logs. Diagnostics attached. Also, when going into backup/restore appdata and choosing the restore option I get this, but I assume that's because Docker isn't running.

      I just hope that after the rebuild is done and a reboot it comes back to life, because otherwise I haven't got a clue how to recover this. The server had been running for 3 months without an issue; the disk 1 problem happened on the 5th but was only noticed today.

      vault101-diagnostics-20240108-1713.zip
  13. I have not had the time to look further into it, so yes, it's still an issue with Virgin. I run through NordVPN, but it does not want to play ball with Virgin. How are you getting on with it?
  14. Hello, just thought I would give an update: uptime is now 28 days, so it seems more stable with the macvlan change. I will continue to monitor it, but so far so good.
  15. OK, I will implement the change from macvlan to ipvlan to test this; fingers crossed it fixes it. I have also checked that link and ran the command to clear the errors, which didn't help; I had to power down and reset the cache disks. So I'm going to look into the cable on the cache disks, just in case.