About Maticks
  1. Wondering if you can use ZFS on a cache pool of two M.2 drives of the same size. I tried running BTRFS with two drives and always ended up with one drive going into read-only mode at random, then having to rebuild the mirror again. I always end up ripping out one drive and running unprotected off a single BTRFS cache drive; it only seems happy with that setup. Not sure if it was a bug, and I haven't tried in a good year now, but I want to fix this protection issue. I noticed ZFS has some support here for data drives, so can that be extended to the cache pool as well? BTRFS seems a bit broken, at least for me.
  2. When VMs are running on br0, traffic will not flow across the default gateway out of the Unraid server; from the local LAN it works fine. It also stops any Dockers running on br0 from working on the LAN at all. When you stop all the VMs running on br0, the Dockers on br0 all return to normal.
  3. This problem can be annoying. Delete the boot BIOS from the LSI card. You can no longer boot from a drive attached to the LSI card, but you can still boot off your USB ports and onboard SATA ports. If you put more than one LSI controller in your system you have to do this; I don't know why, but the BIOSes on the controllers fight and disks go into error randomly. You simply run the flash tool on the LSI card and tell it to delete the BIOS:
     Go to the boot section of your motherboard setup, set the first boot device to the UEFI shell, and restart.
     From the SHELL> prompt, type sas2flash -list and save the output by your preferred method.
     From the SHELL> prompt, type sas2flash -o -e 5 to erase the boot services area of the flash chip.
     One last time, type sas2flash -list and verify the BIOS Version now reads N/A.
  4. I'm back on 6.6.7, really hoping for a fix soon.
  5. That seems a bit odd; disks are mounted by serial number. Is this your first reboot since installing Unraid? It looks like your UUIDs are duplicated, which usually only happens when a drive is being rebuilt (your log entry is below), but I have never seen duplicate UUIDs on all the disks. You should be able to mount your filesystem under Unassigned Devices at the bottom; then load up a terminal and see if your data is intact. If you cannot mount the filesystem under Unassigned Devices, then your data is possibly gone, or you will need to generate new UUIDs with the command below. Don't forget the 1 at the end; it refers to the first partition on the disk. I don't know what caused your issue though.
     xfs_admin -U generate /dev/sdX1
     Aug 3 23:53:12 BigRig emhttpd: shcmd (542): mkdir -p /mnt/disk4
     Aug 3 23:53:12 BigRig emhttpd: shcmd (543): mount -t btrfs,xfs,reiserfs -o noatime,nodiratime /dev/md4 /mnt/disk4
     Aug 3 23:53:12 BigRig kernel: XFS (md4): Filesystem has duplicate UUID a6a54d96-2b0b-425f-9bd1-553450b28931 - can't mount
     Aug 3 23:53:12 BigRig root: mount: /mnt/disk4: wrong fs type, bad option, bad superblock on /dev/md4, missing codepage or helper program, or other error.
     Aug 3 23:53:12 BigRig emhttpd: shcmd (543): exit status: 32
     Aug 3 23:53:12 BigRig emhttpd: /mnt/disk4 mount error: No file system
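If several disks need new UUIDs, here is a minimal dry-run sketch. The device names /dev/sdb1 and /dev/sdc1 are examples only, and it just prints the commands rather than running them; remove the leading echo (with the filesystems unmounted) to actually apply the change.

```shell
# Dry-run sketch: print, don't execute, the UUID-regeneration command
# for each XFS partition listed. /dev/sdb1 and /dev/sdc1 are examples.
disks="/dev/sdb1 /dev/sdc1"
for d in $disks; do
  # xfs_admin -U generate stamps a fresh random UUID on the superblock;
  # drop the leading 'echo' once you're sure about the device list.
  echo xfs_admin -U generate "$d"
done
```

Run it as-is first to check the targets; xfs_admin refuses to touch a mounted filesystem, so unmount each one before the real run.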
  6. The only way I managed to fix my issue was rolling back to before 6.7.x, and everything is working smoothly again. I tried dropping my Mover priority; whatever is causing it seems to be more than just a priority setting.
  7. Hi all, I know some changes took place in the kernel during the 6.7.0 update in regard to I/O. I run Plex over Ethernet from my Unraid server to my TV, Direct Play only, so no transcoding is taking place. If Mover is running, the video pauses every few seconds until Mover stops. I set up Mover Tuner to try to lower the priority of Mover; I thought it worked, but it turns out it hasn't. I tried using reconstruct write instead of auto to see if turbo write would fix it, but no, it doesn't seem to be a write-speed issue. Plex is also running out of RAM for /tmp, and my SSD is an NVMe drive, so it's not a speed issue on the SSD either. My settings are attached. So I decided to try rolling back to before 6.7.0, and after a decent amount of testing everything works flawlessly. Is there any way to run 6.7.2 with whatever changed before 6.7.0 that broke I/O while Mover is running? This seems to be a common topic recently and we are all having the same issue, so something has changed in the last release.
  8. Really good Squid plugin to install going forward. A cache pool is good redundancy, but a backup is always better.
  9. So... the Docker updates upgrade the container itself; they don't perform the upgrade on Nextcloud. You can do it through the web UI, but I find the wheels fall off that sometimes and it becomes a pain. I prefer doing mine through the CLI; it's also easier to fix if something goes wrong when you're in the CLI. Open a console on the Docker image and run:
     cd /config/www/nextcloud/updater/
     root@bd32ce1bcd66:/config/www/nextcloud/updater# sudo -u abc php updater.phar
     This launches the CLI upgrade tool, which tells you what version you are on so you can do the upgrade:
     Nextcloud Updater - version: v14.0.2RC2-7-g57268cb
     Current version is 15.0.5.
     Update to Nextcloud 16.0.3 available. (channel: "stable")
     Following file will be downloaded automatically: https://download.nextcloud.com/server/releases/nextcloud-16.0.3.zip
     Steps that will be executed:
     [ ] Check for expected files
     [ ] Check for write permissions
     [ ] Create backup
     [ ] Downloading
     [ ] Verify integrity
     [ ] Extracting
     [ ] Enable maintenance mode
     [ ] Replace entry points
     [ ] Delete old files
     [ ] Move new files in place
     [ ] Done
     Start update? [y/N] y
     Info: Pressing Ctrl-C will finish the currently running step and then stops the updater.
     [✔] Check for expected files
     [✔] Check for write permissions
     [✔] Create backup
     [✔] Downloading
     [✔] Verify integrity
     [✔] Extracting
     [✔] Enable maintenance mode
     [✔] Replace entry points
     [✔] Delete old files
     [✔] Move new files in place
     [✔] Done
     Update of code successful.
     Should the "occ upgrade" command be executed? [Y/n] Y
     Sit back, drink a coffee, and wait for it to finish; it can take a while before there is any output here. Then say No here:
     Keep maintenance mode active? [y/N] N
     If you say yes, you need to run another command to disable maintenance mode and get Nextcloud back up and running. That is for when you want to do other things after the upgrade before making Nextcloud live again.
     The first time you connect to the web UI you will need to click a quick-start upgrade button, but it just does a DB check and starts.
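For reference, if you did keep maintenance mode active, the usual way to turn it back off is Nextcloud's occ tool; the path and the abc user below assume the same linuxserver.io container layout used in the updater steps above.

```shell
# Turn maintenance mode back off so Nextcloud serves requests again
# (run from the container console, same as the updater)
sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off
```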
  10. Yeah I did. I downloaded 6.7.2, which seems to be a lot better with the E1000 driver in ESXi, and boom, it just worked. You do have to kill bonding though: change it to No in /boot/config/network.cfg, then reboot and it works.
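A sketch of that edit, done on a scratch copy first so you can check the result before touching the real /boot/config/network.cfg; the file contents here are a stand-in, since the actual file carries more settings.

```shell
# Work on a throwaway copy first; point the same sed at
# /boot/config/network.cfg once the output looks right, then reboot.
cfg=$(mktemp)
printf 'BONDING="yes"\nBONDNAME="bond0"\n' > "$cfg"   # stand-in contents
# Flip the bonding flag from "yes" to "no"
sed -i 's/^BONDING="yes"/BONDING="no"/' "$cfg"
cat "$cfg"
```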
  11. I had this exact problem; the cause was that I had two LSI cards installed, and they fight over the boot BIOS. With one installed there are no issues; with two installed it randomly locks up and won't boot, just a black screen after the BIOS screen. It doesn't present the LSI boot menu because both cards are fighting for the boot slot. Sometimes it will boot up after a few restarts and work for a few minutes or hours, then disks will randomly get read errors; sometimes on boot some disks will just be disabled and not be there. You can use the LSI flash tool to delete just the boot BIOS; the downside is you cannot boot off a SATA drive on the LSI cards anymore. However, with Unraid you are booting off USB, so that's no issue. I deleted it off both my LSI cards so it would stop happening. This caused no end of issues for me; I replaced my PSU, motherboard, CPU, and memory, and it was still happening. After a long time I discovered a post about two LSI cards: https://www.ixsystems.com/community/threads/lsi-9207-8i-can-i-erase-just-the-bios-leave-the-fw.60861/ https://forums.servethehome.com/index.php?threads/lsi-9211-8i-it-mode-stuck-during-loading.8183/ You will find a heap of articles around about it.
  12. That usually hits all disks on that 5V cable though. I guess it's about looking for patterns: if it's two disks, what do those two disks share? Same controller or same 5V cable; process of elimination.
  13. I've run into these types of problems before. Unfortunately it can be a few things. A disk failure is always possible, but a second disk running into the same issue at the same time is unusual, unless there has been some kind of event, like vibration, that damaged both drives. A quick way to rule this out: try moving the SATA drives to the onboard controller, or if they are already on the onboard SATA controller, try moving them to the LSI card. It could be a data cable issue or a controller issue. If all drives were running into an issue, it could be RAM or PSU, but given it's only two drives, I'd look at your SATA data cables or controller.
  14. Got it working; changed it to a SATA controller.
  15. Does anyone know what setting I need to set to see a virtual disk in Unraid? I got the network working and everything else; I just can't see a disk. It comes up as Data Disk 1 Unassigned, and I can't see the disk in dmesg. I've attached a screenshot of my ESXi settings; any pointers?