Everything posted by mrpainnogain

  1. Instead of rewriting the whole thing, I just added pcie_aspm=off to the config. It has been running fine for the past 2+ hours; I will report back if another error shows up. In the meantime, can you please check my new diagnostics? I have one NVMe with serial 2N462LQ5CSYH that always goes offline after being connected to the system for several minutes/hours. Do you happen to know why? asas-diagnostics-20240216-1638.zip
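     For anyone else hitting this: the flag goes on the append line in /boot/syslinux/syslinux.cfg (editable from Main > Flash > Syslinux Configuration in the GUI). My boot entry now looks roughly like this; the label text will vary between installs:

         label Unraid OS
           menu default
           kernel /bzimage
           append initrd=/bzroot pcie_aspm=off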
  2. May I know which power management option you mean? I think I am experiencing similar issues.
  3. Okay, after running for approximately 1 hour it happened again, exactly the same thing, and the NVMe cannot be accessed anymore either. I also posted asas-diagnostics-20240216-1334.zip
  4. I have been experiencing I/O device errors with different NVMe drives and in different NVMe slots on the motherboard. After running for more than 5 days, one of my NVMe drives suddenly threw that warning and could not be accessed anymore, and I had to restart Unraid; luckily the problem has not come back since. I initially thought the issue only happened on the BTRFS filesystem, but two of my NVMe drives got the same warning even on XFS. What might be the issue here? Is it a bad motherboard? I have 3 NVMe drives on the mobo, and only one of them seems to never have any problem. asas-diagnostics-20240216-1159.zip
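     Would pulling the drives' SMART data help narrow it down? From the console I assume it would be something like this (the device name is just an example; each drive enumerates differently):

         # overall health, attributes, and error log for the first NVMe
         smartctl -a /dev/nvme0
         # or the controller's own SMART log via nvme-cli
         nvme smart-log /dev/nvme0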
  5. I did that but it did not solve it; the detected space is still only 20 GB. I have already restarted the Docker container as well.
  6. My previous Scrypted docker somehow got corrupted, so I created a new one. I made a path to the footage folder, which is located at /mnt/user/SCRYPTED/footage; that share lives only on disk 4, which is a 1 TB HDD. But my Scrypted docker is only detecting 20 GB, and I also could not find the previous footage. Am I making the wrong path for the footage? I actually just copied exactly what the previous docker had. asas-diagnostics-20240212-0319.zip
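     For clarity, the path I set up boils down to a bind mount like this (the container-side path /footage is a placeholder; the real one is whatever the template uses):

         # host share on disk 4 mapped into the container (other flags omitted)
         docker run -v /mnt/user/SCRYPTED/footage:/footage <scrypted-image>

     As I understand it, anything the container writes outside the mapped path lands inside the 20 GB docker.img instead of on the share, which might explain the 20 GB figure.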
  7. Hi guys, just this morning I got a log saying that one of my NVMe drives, the one containing appdata, is somehow corrupted. I managed to move the shares back to the NVMe where the data was originally stored, but the Docker containers started fresh instead. I did not change anything in the container or host paths. Am I missing something here, or did the files somehow get overwritten? To be more specific: a couple of days ago the appdata share was on the "cache" pool, then I changed the share path to the "app_data" pool. But then the app_data pool got corrupted, so I changed the appdata share back to the cache. asas-diagnostics-20240211-1131.zip
  8. Yes, the device is OK by itself, as I am currently using it. You are right, the device used to be part of the same pool, but due to BTRFS corruption I took it off the motherboard, and since then, every time I connect that device the cache pool gets the "missing disk" notification. When you say wipe it, do you mean format the NVMe before starting the array?
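     Or, if by wipe you mean clearing the old filesystem signatures from the console, I assume it would be something like this? (The device name is only an example; I would triple-check it first.)

         # remove all leftover filesystem/pool signatures so the device shows as blank
         wipefs -a /dev/nvme1n1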
  9. There we go:

         Label: none  uuid: 7f94e2af-8931-47e7-8457-d4946fa8df48
             Total devices 1 FS bytes used 597.08GiB
             devid    1 size 953.87GiB used 654.06GiB path /dev/nvme0n1p1

         Label: none  uuid: ebf65eba-8218-475e-94e1-6aa9c3e2dc13
             Total devices 1 FS bytes used 5.41GiB
             devid    1 size 20.00GiB used 8.02GiB path /dev/loop2

         Label: none  uuid: 3d8ee1e7-2c37-4667-9518-d41824ba9b65
             Total devices 1 FS bytes used 2.41MiB
             devid    1 size 1.00GiB used 126.38MiB path /dev/loop3
  10. Okay, back here. How do I resolve this issue? I've attached the diags, with and without the new NVMe attached to the system. Basically I want the NVMe with serial starting 2L082xx to stay the same on the system, as it has my current VM and other data, but I want to create a new pool for the other NVMe, the one with serial starting 2N462XX. asas-diagnostics-20240208-1333.zip asas-diagnostics-20240208-1406.zip
  11. Okay, it finally works again now. All I had to do was update the BIOS; my mobo is a Z590 Vision G, and the version went from F2 to F8. I can boot into Unraid again. It is weird though, I didn't do any software updates, all I did was add another NVMe.
  12. It just gets stuck like this: IMG_8091.MOV IMG_8092.HEIC
  13. Hi there, I have already done the CMOS reset; it was confirmed by the "BIOS has been reset" notification when I rebooted. But the same thing happened again: after I chose the USB, the screen got stuck on a black screen. I haven't even reached the blue boot menu yet.
  14. OK, I did boot from the USB and everything went smoothly on my other PC. What might be the issue here? I could see everything in the GUI and there seems to be no problem, just that the disks are missing, as expected. I also posted my latest diagnostics from the other PC (just in case it's needed). asas-diagnostics-20240208-0105.zip
  15. Is it safe to do so? I tried plugging the USB into another PC, and it can read the data inside without any problem. I did not try to boot from the USB though.
  16. Can I put the USB in another PC? Maybe view some logs?
  17. I did try; still the same issue, just a black screen with a dash in the corner. Maybe the USB config somehow got messed up?
  18. I always press the boot key and choose the USB manually; after that I choose which Unraid mode I want to boot into. But this time, after I chose the usual USB, it's just a black screen. I actually did a backup just before I added the NVMe. Do I have to remove the NVMe first, or can I just switch to another USB and boot straight away? To be more specific: I added one NVMe, made a new pool on it, and installed a VM on it. I shut down the system, then added another NVMe (so 3 NVMe drives in total), and after that it won't boot at all.
  19. In the meantime, I just got another problem. Somehow my Unraid won't boot at all; the screen just goes blank when I choose the USB that is used for Unraid. What happened is that I added another NVMe to the motherboard. I connected a monitor, and it only shows a "-" dash in the top-left corner. Any solution?
  20. Sorry, I am a bit inexperienced with Unraid. How do I do that, since I can't format the disk without the array started?
  21. Hmm, any idea what might be the issue here? With only one NVMe everything went smoothly, but the issue came up every time I added another NVMe (no matter which slot on the mobo).
  22. Hey, I have another question. I just put another NVMe in the system, but now the cache that used to be labelled nvme0n1 became "nvme1n1", which triggered the disk missing warning again, and the cache became "Unmountable: unsupported file system". How do I force the name to stay "nvme0n1" for the first cache? asas-diagnostics-20240207-1402.zip
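     Or is matching by the stable IDs the way to go? As I understand it, Unraid tracks pool members by serial rather than by the nvmeXn1 name, which can shuffle between boots. Something like this shows the persistent names:

         # persistent names built from model + serial, independent of enumeration order
         ls -l /dev/disk/by-id/ | grep nvme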
  23. Yes, since I am afraid that in my case there is something wrong with the way I added the 2nd NVMe to the cache pool to be configured as BTRFS. I have already sent the NVMe back to the shop, and they did not find anything wrong with it. In the meantime I want to test the NVMe as an "unassigned device" and install a VM on it, just to make sure the device is actually in good condition. Other than the BTRFS format, is there any way I can mirror one cache to the other? Maybe by copying the contents of one cache to the other on a regular basis while services like Docker and the VMs are off, something like the sketch below?
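     A minimal sketch of that copy, assuming the two pools mount at /mnt/cache and /mnt/cache2 (the pool names are a guess for my setup), run while Docker and the VMs are stopped:

         # one-way mirror: copy everything, drop files that no longer exist on the source
         rsync -avh --delete /mnt/cache/ /mnt/cache2/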