pballs

Members
  • Posts: 14

  1. Unfortunately there is nothing in the IPMI event viewer, nor in any of the motherboard sensor readings.
  2. Hi all, suddenly getting the following error in the Unraid logs: kernel: EDAC MC0: 1 CE ie31200 CE on mc#0csrow#3channel#1 (csrow:3 channel:1 page:0x0 offset:0x0 grain:8 syndrome:0xf2) I run ECC memory, but I'm guessing something is amiss with one of the DIMMs. Can anyone assist with working out which DIMM is reporting the issue? Thanks in advance.
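The csrow/channel pair in that message is what maps to a physical DIMM. A minimal sketch, assuming the standard EDAC sysfs layout exposed by the ie31200_edac driver; the parsing runs anywhere, but the label lookup in the final comment only works on the live server:

```shell
# Pull csrow and channel out of the EDAC kernel message (line taken from the post).
msg='EDAC MC0: 1 CE ie31200 CE on mc#0csrow#3channel#1 (csrow:3 channel:1 page:0x0 offset:0x0 grain:8 syndrome:0xf2)'
csrow=$(printf '%s' "$msg" | sed -n 's/.*csrow:\([0-9]*\).*/\1/p')
channel=$(printf '%s' "$msg" | sed -n 's/.*channel:\([0-9]*\).*/\1/p')
echo "csrow=$csrow channel=$channel"
# On the server itself, the driver exposes a human-readable slot label:
#   cat /sys/devices/system/edac/mc/mc0/csrow${csrow}/ch${channel}_dimm_label
```

If the label comes back empty, the fallback is the motherboard manual's DIMM slot map: csrow/channel pairs usually enumerate the slots in board order.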
  3. Oddly, a restart of the Docker service cleared the issue.
  4. Looks like the issue happens immediately after a CA Backup/Restore operation:
     Jul 14 05:12:42 homenas4 CA Backup/Restore: #######################
     Jul 14 05:12:42 CA Backup/Restore: appData Backup complete
     Jul 14 05:12:42 CA Backup/Restore: #######################
     Jul 14 05:12:42 CA Backup/Restore: Deleting /mnt/user/CommunityApplicationsAppdataBackup/[email protected]
     Jul 14 05:12:42 CA Backup/Restore: Backup / Restore Completed
     Jul 14 05:12:42 kernel: BTRFS warning (device loop2): csum failed root 368 ino 4162 off 53346304 csum 0xbf0b209e expected csum 0x20aadd8c mirror 1
  5. Getting this warning: BTRFS warning (device loop2): csum failed root 368 ino 4162 off 53346304 csum 0xbf0b209e expected csum 0x20aadd8c mirror 1 I had this issue a while ago, recreated docker.img and re-installed all the apps, and it ran fine for a month or so, but it's happened again now. Is there something else I should be doing to fix this? Diags attached, and thanks in advance. diagnostics-20210714-1326.zip
  6. Spot on: deleted and rebuilt docker.img, re-installed the apps from Previous Apps, and all working again.
  7. Hi all, suddenly getting constant BTRFS warnings: 'BTRFS warning (device loop2): csum failed root 1637 ino 24041 off 724992 csum 0x9240a8a2 expected csum 0x31960657 mirror 1' Is this on my docker.img (loop2)? Would a shutdown, deletion of the docker image file, and reinstall of the apps via Previous Apps sort this? Diags attached. diagnostics-20210623-1653.zip
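Which file backs loop2 can be confirmed rather than guessed. A hedged sketch: the parsing below runs anywhere, and the commented losetup lookup on the live server shows the backing file (for Unraid's Docker service that is normally docker.img):

```shell
# Extract the device name from the warning line (taken from the post).
warn='BTRFS warning (device loop2): csum failed root 1637 ino 24041 off 724992 csum 0x9240a8a2 expected csum 0x31960657 mirror 1'
dev=$(printf '%s' "$warn" | sed -n 's/.*(device \([a-z0-9]*\)).*/\1/p')
echo "device=$dev"
# On the server itself, the BACK FILE column shows what the loop device maps to:
#   losetup --list | grep "$dev"
```

Checksum failures confined to the loop device usually mean corruption inside the image file, which is why recreating docker.img and reinstalling from Previous Apps clears it.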
  8. Thanks all; swapped the drives, did a new config, and parity is currently rebuilding (18 hours to go).
  9. Hi there, I have 4 x 4 TB drives at the moment (1 parity, 3 data). There's not a lot on there, so all the data is actually on disk 1. I've now got some 14 TB drives to replace them all. My plan is:
     1) Stop the array.
     2) New config: assign just the drive that has the data on it (so no parity, and the 2 empty drives unassigned).
     3) Start the array.
     4) Shut down the server.
     5) Physically replace the parity drive and the 2 empty data drives with the 14 TB drives.
     6) Start the server.
     7) New config: assign one 14 TB drive to the parity slot and the other two 14 TB drives as disks 2 and 3 (so I've still got the 4 TB drive as disk 1); don't start the parity rebuild.
     8) Use unBALANCE to copy the data from disk 1 (the old 4 TB) to disk 2 (a new 14 TB).
     9) Stop the array, do a new config, and assign the remaining 14 TB drive as disk 1.
     10) Start the array and rebuild parity.
     I know I'll be without parity for a while, but that's not a problem. Is this sort of right? Thanks in advance.
  10. It's under Tools -> Diagnostics -> Download.
  11. Thanks, will give this a try.
  12. Good morning all, just had a funny one on my Unraid server. I noticed Plex had stopped responding, so I logged onto the server, and Unraid was alerting that my NVMe cache drive was missing: 25-11-2020 03:19 Warning [HOMENAS4] - Cache pool BTRFS missing device(s) Samsung_SSD_970_EVO_Plus_1TB_S4EWNMFN818406F (nvme0n1) The drive was still listed in the dashboard, but was shown as spun down (grey, not green); it wouldn't spin up, and it looks to have been set read-only. I collected the logs. A reboot didn't fix the issue (the drive was then no longer listed at all), but a full shutdown and power off/on of the system worked: the drive showed up again, though I had to stop the array and add it back in as the cache drive. I'm guessing maybe faulty hardware; the server had been up for 15 days with no issues prior to that. Could someone please have a look at the diag logs to see why this happened? homenas4-diagnostics-20201127-0924.zip
  13. Ah, that might be it. They are 4 TB drives, with only about 40 GB on one drive in total. I thought split level would put files across drives irrespective of disk space (so, in my example above, season 1 on drive 1, season 2 on drive 2, etc.)? Or have I understood that wrong?
  14. Hello all, I have a strange issue with split level. I have the following structure:
      User share: TVSeries
        Folder: SeriesName
          Folder: Season
      The user share is set to High Water allocation, and I have it set to split level 2, so shouldn't it put each season onto a different disk? (I have 4 drives, 1 parity.) But it doesn't seem to be doing that; all series and seasons are getting put on disk 1. What am I missing? Thanks in advance!
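The fill pattern described in posts 13 and 14 is explained by the allocation method rather than split level. A sketch of the High Water rule with illustrative figures (not read from the poster's server): the first high-water mark is half the largest data disk, and writes stay on disk 1 until its free space drops below that mark.

```shell
# Illustrative numbers: 4 TB data disks, ~40 GB used on disk 1 in total.
largest_gb=4000                 # largest data disk, in GB
mark_gb=$((largest_gb / 2))     # first high-water mark: half the largest disk
disk1_free_gb=3960              # free space on disk 1 (~40 GB used)
if [ "$disk1_free_gb" -gt "$mark_gb" ]; then
  echo "disk 1 is still above the mark, so new files keep landing on disk 1"
fi
```

So with only 40 GB written, disk 1 stays far above the 2 TB mark and High Water never moves on; split level only constrains where a directory's files may go once allocation would otherwise split them.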