Maticks

Members
  • Posts: 323

Everything posted by Maticks

  1. That makes sense. These Intel ones aren't the best for endurance, that's for sure. I'll need to look for a replacement brand that's rated for more writes.
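     For anyone checking wear on their own drive, the relevant NVMe counters can be read with smartctl; the device path below is just a placeholder, and note that each "data unit" is 1,000 x 512-byte blocks, so multiply by 512,000 to get bytes:
     smartctl -a /dev/nvme0 | grep -i -E 'percentage used|data units written'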
  2. That might be part of the issue: I can't start a SMART test on the drive manually, either short or extended.
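     For reference, a manual self-test can normally be kicked off from the terminal with smartctl, although some older NVMe drives simply don't implement the optional self-test command; the device path below is an assumption:
     smartctl -t short /dev/nvme0
     smartctl -t long /dev/nvme0
     smartctl -a /dev/nvme0    # check progress and the logged results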
  3. I thought when NVMe drives died they just stopped working. I can't seem to find any issue with the drive, but it has failed SMART. Do I just ignore the error, or should I change out the NVMe drive? I think the drive goes into read-only mode when it's dead, from what I was reading online. I have attached the SMART logs. vault-smart-20210201-1358.zip
  4. Mine does say RAID1 for both, so it must be working as expected then.
  5. How do you know that the second cache drive you added is a mirror? A balance ran when I added the second drive to the cache pool, but when I ran a full balance the following day I still see these messages in the syslog:
     Jan 20 09:18:52 Vault kernel: BTRFS info (device nvme0n1p1): found 5976 extents
     Jan 20 09:18:52 Vault kernel: BTRFS info (device nvme0n1p1): relocating block group 7241241722880 flags data|raid1
     Jan 20 09:19:03 Vault kernel: BTRFS info (device nvme0n1p1): found 14253 extents
     Jan 20 09:19:04 Vault kernel: BTRFS info (device nvme0n1p1): found 14252 extents
     Jan 20 09:19:04 Vault kernel: BTRFS info (device nvme0n1p1): found 9917 extents
     Jan 20 09:19:07 Vault kernel: BTRFS info (device nvme0n1p1): found 6524 extents
     Jan 20 09:19:08 Vault kernel: BTRFS info (device nvme0n1p1): relocating block group 7240167981056 flags data|raid1
     Jan 20 09:19:20 Vault kernel: BTRFS info (device nvme0n1p1): found 7945 extents
     Jan 20 09:19:21 Vault kernel: BTRFS info (device nvme0n1p1): found 7945 extents
     Jan 20 09:19:21 Vault kernel: BTRFS info (device nvme0n1p1): relocating block group 7239094239232 flags data|raid1
     Jan 20 09:19:35 Vault kernel: BTRFS info (device nvme0n1p1): found 11760 extents
     Jan 20 09:19:36 Vault kernel: BTRFS info (device nvme0n1p1): found 11760 extents
     Jan 20 09:19:36 Vault kernel: BTRFS info (device nvme0n1p1): found 5888 extents
     Jan 20 09:19:36 Vault kernel: BTRFS info (device nvme0n1p1): found 5888 extents
     Jan 20 09:19:38 Vault kernel: BTRFS info (device nvme0n1p1): found 5888 extents
     Jan 20 09:19:38 Vault kernel: BTRFS info (device nvme0n1p1): relocating block group 7238020497408 flags data|raid1
     Does this mean that the data isn't mirroring properly on the second drive, or is this completely normal? The nvme device is my original cache drive; I've added a second SSD as my cache 2 drive.
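     A quick way to confirm the pool's profile, assuming it is mounted at /mnt/cache, is to check that the Data and Metadata lines report RAID1 and that both devices show allocations:
     btrfs filesystem df /mnt/cache
     btrfs filesystem usage /mnt/cache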
  6. cd /tmp
     du -h
     That will give you a list, in M or G, of how big the Transcode directory is. In the Unraid WebUI, the Dashboard shows your usage under Memory, in the RAM section. Transcoding 1080p or 720p uses very little RAM; if you are transcoding down from 4K, that can use a few GBs.
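     If you only want per-directory totals rather than every file, these are an alternative (the transcode directory name depends on how your container is mapped):
     du -sh /tmp/*
     free -h    # overall RAM usage from the terminal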
  7. Hi guys, I've been putting off doing this for a while after some BTRFS issues with two SSDs in my cache pool, probably two years ago now. My Intel 600p drive, which is rated for 144TB of lifetime writes, is now at 342TB written, so I thought I'd better fix this now while it's all working. Does anyone have a suggestion on how I might load in my second SSD? I've added it into the system and it's clean, but how do I add it into my cache pool as a mirrored disk so my cache drive has a redundant device? Is there an easy way to do this? I see "Convert to RAID1 mode" under Balance, but I'm not sure whether that's going to wipe the existing data.
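     For reference, assigning the second device to the pool in the GUI normally takes care of this, but the underlying btrfs steps look roughly like the sketch below (the device name is a placeholder). The convert rewrites existing block groups into the RAID1 profile rather than wiping them:
     btrfs device add /dev/sdX1 /mnt/cache
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
     btrfs balance status /mnt/cache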
  8. XFS "just works". It's an old FS but it's been around a long time, and it's pretty resistant to being corrupted by an unclean shutdown or a disk on its way out. BTRFS is newer and seems pretty stable, but when things go wrong, the repair utilities are what let it down. I've tried using BTRFS, always run into some kind of issue, and end up back at XFS, not just on Unraid but on other systems as well; it was not that great on my laptop. There is no real performance advantage to XFS vs BTRFS; the snapshot stuff is nice to have, but if you want that I'd sooner go to ZFS on Unraid. If you want to keep it simple: XFS with dual parity, and if file corruption is a concern, run Dynamix File Integrity. That's the cheapest way to get the most space with protection and stability. I would go the whole hog with ZFS if you want the protection and snapshots with great utilities to repair the filesystem in the event of an issue, but you will be bound to the rules of the ZFS pool system. You can load the ZFS plugin in Unraid and off you go; YouTube "ZFS Unraid" and you will see a few videos on how to set it up.
  9. Even with the defaults, creating a Windows 10 VM gives me this error when I click Create. Anyone know what is causing it? When I create any other VM type it seems to work fine.
     internal error: process exited while connecting to monitor: 2020-05-21T23:06:15.068081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"root","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: A regular file was expected by the 'file' driver, but something else was given.
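     For what it's worth, the last part of the error means qemu was handed something that isn't a regular file as the disk (the blockdev filename is literally "root"), so one sanity check is to look at whatever path is configured as the primary vdisk, for example (the path below is hypothetical):
     ls -ld /mnt/user/domains/Windows10/vdisk1.img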
  10. Maybe it is just a controller difference, not sure; all my other 9 drives have WD-WCXXXXXXXX for their serial number. It seems to have passed, though with two uncorrectable errors. Kicked off an extended SMART test as well.
      Event: Preclear on 3704212
      Subject: Preclear: PASS! Preclearing Disk 3704212 (/dev/sdv) Finished!!!
      Description: Preclear: PASS! Preclearing Disk 3704212 (/dev/sdv) Finished!!! Cycle 1 of 1
      Importance: normal
      Disk sdv has successfully finished a preclear cycle!
      Ran 1 cycles.
      Last Cycle's Pre-Read Time: 8:38:14 @ 128 MB/s.
      Last Cycle's Zeroing Time: 7:51:33 @ 141 MB/s.
      Last Cycle's Post-Read Time: 8:39:03 @ 128 MB/s.
      Last Cycle's Elapsed Time: 25:07:36
      Disk Start Temperature: 31 C
      Disk Current Temperature: 33 C
      S.M.A.R.T. Report
      ATTRIBUTE                     INITIAL  CYCLE 1  STATUS
      5-Reallocated_Sector_Ct       0        0        -
      9-Power_On_Hours              1        27       Up 26
      183-Runtime_Bad_Block         0        0        -
      187-Reported_Uncorrect        0        2        Up 2
      190-Airflow_Temperature_Cel   30       33       Up 3
      196-Reallocated_Event_Count   0        0        -
      197-Current_Pending_Sector    0        0        -
      198-Offline_Uncorrectable     0        0        -
      199-UDMA_CRC_Error_Count      0        0        -
      SMART overall-health self-assessment test result: PASSED
  11. I think I got done buying a counterfeit drive, hmm. The serial number is very weird... this drive is "WDC_WD40EFRX-68N32N0_3704212", while all my other WD Red 4TB drives look like "WDC_WD40EFRX-68N32N0_WD-WCC7K6ELLFVA".
  12. It's a WDC_WD40EFRX WD Red 4TB drive, and the seller has gone out of business so I can't even take it back. My only option is an RMA through Western Digital; I guess I'll wait for the preclear to finish and see if there are bad results.
  13. I bought a new WD Red drive and started my preclear. The SMART info shows it as new with no errors, but as the preclear starts, the speed holds at 164MB/sec and every 2 seconds it makes this clicking noise for 1 second, about 5-6 clicks in the same pattern over and over. I can't remember any of my other precleared drives making noise while preclearing. Anyone know if this is a bad sign? I don't see any errors as of yet; I'm only at stage 1 of the preclear.
  14. I have a 9210-8i that wouldn't let me flash it from IR to IT mode, throwing some error messages. Then, after a reboot, when I run -list it says the controller is not operational and asks for a firmware file; neither the 2108it nor the 2108ir image seems to work. It says a firmware host boot is required and it's stuck in the RESET state. It looks like it needs a file I can't find to get it out of this state; has anyone else seen this before? It is an LSI SAS2008(B2); not sure if the B2 is the issue, I haven't seen that before.
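      For reference, one commonly suggested recovery attempt from this state is to point sas2flash at a matching IT firmware image for the chip; the file names below are the usual ones for SAS2008-based cards but are assumptions, and the image has to match the card:
      sas2flash -listall
      sas2flash -o -f 2118it.bin -b mptsas2.rom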
  15. The second parity check completed with 0 errors. Should I bin this disk when my new one arrives, or is it OK to keep running it? It is 7 years old, coming up on 8, so it's getting close to end of life; it's from back in the old days before I knew about NAS drives.
  16. My SMART test finished. vault-smart-20200409-2146.zip
  17. Attached is the diagnostics file. It looks like the disk that had the errors this morning is now throwing read errors while the parity check is running; I'm seeing 32 errors so far. vault-diagnostics-20200408-1901.zip
  18. Hoping someone here can help. I have run a parity check with the write-corrections box ticked, and I've included some screenshots. On 12-01-2020, 5847 errors were corrected after running the check. On 12-02-2020, 5849 errors were corrected after running the check. I am running it again today, it's most of the way through, and I am back at 5849 again. My disk 8 threw some SMART messages: offline uncorrectable is 8 and current pending sector is 8. I just ordered a new drive and will pull this one out and replace it once it arrives. My issue is: will my array rebuild properly here? I am running this check now because I want to make sure my data is correct before I pull the disk out. Is anyone able to help work out where these errors are coming from?
  19. Wondering if you can use ZFS on a cache pool of 2 x M.2 drives of the same size. I tried running BTRFS with two drives and always ended up with one drive going into read-only mode randomly, then having to rebuild the mirror again. I always end up ripping out one drive and running unprotected off a single cache drive; BTRFS seems only happy with that setup. Not sure if it was a bug, but I haven't tried in a good year now, and I do want to fix this protection issue. I noticed ZFS has some support here for data drives, so can that be extended to the cache pool as well? BTRFS seems a bit broken, at least for me.
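      For reference, with the ZFS plugin a mirrored pool across two NVMe drives is created along these lines; the pool name and device paths below are placeholders:
      zpool create -o ashift=12 cachepool mirror /dev/nvme0n1 /dev/nvme1n1
      zpool status cachepool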
  20. When VMs are running on br0, traffic will not flow across the default gateway out of the Unraid server; from the local LAN it works fine. It also stops any Dockers running on br0 from working on the LAN at all. When you stop all the VMs running on br0, the Dockers running on br0 all return to normal.
  21. This problem can be annoying. Delete the boot BIOS from the LSI card. You cannot boot from a drive on the LSI card any more, but you can still boot from your USB ports and the onboard SATA ports. If you place more than one LSI controller in your system you have to do this; I don't know why, but the BIOSes on the controllers fight and disks go into error randomly. You simply run the flash tool on the LSI card and tell it to delete the BIOS. Go to the boot section and set the first boot device to the UEFI shell, then restart. From the SHELL> prompt, type in this command:
      sas2flash -list
      Save the results of this output by your preferred method. From the SHELL> prompt, type in this command to erase the boot services area of the flash chip:
      sas2flash -o -e 5
      And one last time, type in:
      sas2flash -list
      and verify the BIOS version now reads N/A.
  22. I'm back on 6.6.7, really hoping for a fix soon.
  23. That seems a bit odd; disks are mounted by serial number. Is this your first reboot since installing Unraid? It looks like your UUIDs are duplicated, which usually only happens when a drive is being rebuilt; your log entry is below. But I have never seen duplicate UUIDs on all the disks. You should be able to mount your filesystem under Unassigned Devices at the bottom, then load up a terminal and see if your data is intact. If you cannot mount the FS under Unassigned Devices, then your data is possibly gone, or you will need to generate new UUIDs with the command below. Don't forget the 1 at the end. I don't know what caused your issue, though.
      xfs_admin -U generate /dev/sdX1
      Aug 3 23:53:12 BigRig emhttpd: shcmd (542): mkdir -p /mnt/disk4
      Aug 3 23:53:12 BigRig emhttpd: shcmd (543): mount -t btrfs,xfs,reiserfs -o noatime,nodiratime /dev/md4 /mnt/disk4
      Aug 3 23:53:12 BigRig kernel: XFS (md4): Filesystem has duplicate UUID a6a54d96-2b0b-425f-9bd1-553450b28931 - can't mount
      Aug 3 23:53:12 BigRig root: mount: /mnt/disk4: wrong fs type, bad option, bad superblock on /dev/md4, missing codepage or helper program, or other error.
      Aug 3 23:53:12 BigRig emhttpd: shcmd (543): exit status: 32
      Aug 3 23:53:12 BigRig emhttpd: /mnt/disk4 mount error: No file system
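      If you want to see which disks actually share a UUID before regenerating anything, either of these will list them (run against the partitions, not the md devices; the device names are placeholders):
      blkid /dev/sd*1
      xfs_admin -u /dev/sdX1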
  24. The only way I managed to fix my issue was rolling back to before 6.7.x, and everything is working smoothly again. I tried dropping my mover priority; whatever is causing this seems to be more than just a priority setting.