Maticks

Everything posted by Maticks

  1. Hi All, I know some changes took place in the kernel around I/O in the 6.7.0 update. I run Plex over Ethernet from my Unraid server to my TV, Direct Play only, so no transcoding is taking place. If Mover is running, the video pauses every few seconds until Mover stops. I set up Mover Tuning to try to lower Mover's priority; I thought it worked, but it turns out it hasn't. I tried using reconstruct write instead of auto to see if turbo write would fix it, but no, it doesn't seem to be a write-speed issue. Plex is also running out of RAM for /tmp. My SSD is an NVMe drive, so it's not a speed issue on the SSD either. My settings are attached. So I decided to try rolling back to before 6.7.0, and after a decent amount of testing everything works flawlessly. Is there any way to run 6.7.2 with whatever changed in 6.7.0 that broke I/O while Mover is running? This seems to have become a common topic recently and we are all having the same issue, so something changed in the last release.
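     (A hedged aside: one way to test whether I/O priority is the culprit is to kick off Mover by hand under the idle I/O scheduling class and watch whether playback still stutters. The mover script path here is an assumption based on stock Unraid; verify it on your box first.)

         # Assumption: /usr/local/sbin/mover is the stock Unraid mover script.
         # ionice -c 3 puts it in the "idle" I/O class so reads (Plex) win;
         # nice -n 19 drops its CPU priority too.
         ionice -c 3 nice -n 19 /usr/local/sbin/mover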
  2. Really good Squid plugin to install going forward. A cache pool is good redundancy, but a backup is always better.
  3. So... the docker updates upgrade the Docker image itself; they don't upgrade Nextcloud inside it. You can do it through the web UI, but I find the wheels fall off that sometimes and it becomes a pain. I prefer doing mine through the CLI; it's also easier to fix things if something goes wrong when you're in the CLI. Open a console on the docker image:

         cd /config/www/nextcloud/updater/
         root@bd32ce1bcd66:/config/www/nextcloud/updater# sudo -u abc php updater.phar

     This launches the CLI upgrade tool, which tells you what version you are on and lets you do the upgrade:

         Nextcloud Updater - version: v14.0.2RC2-7-g57268cb
         Current version is 15.0.5.
         Update to Nextcloud 16.0.3 available. (channel: "stable")
         Following file will be downloaded automatically: https://download.nextcloud.com/server/releases/nextcloud-16.0.3.zip
         Steps that will be executed:
         [ ] Check for expected files
         [ ] Check for write permissions
         [ ] Create backup
         [ ] Downloading
         [ ] Verify integrity
         [ ] Extracting
         [ ] Enable maintenance mode
         [ ] Replace entry points
         [ ] Delete old files
         [ ] Move new files in place
         [ ] Done
         Start update? [y/N] y
         Info: Pressing Ctrl-C will finish the currently running step and then stops the updater.
         [✔] Check for expected files
         [✔] Check for write permissions
         [✔] Create backup
         [✔] Downloading
         [✔] Verify integrity
         [✔] Extracting
         [✔] Enable maintenance mode
         [✔] Replace entry points
         [✔] Delete old files
         [✔] Move new files in place
         [✔] Done
         Update of code successful.
         Should the "occ upgrade" command be executed? [Y/n] Y

     Sit back, drink a coffee and wait for it to finish; it can take a while before there is any output here. Then say No here:

         Keep maintenance mode active? [y/N] N

     If you say yes, you need to run another command to disable maintenance mode and get Nextcloud back up and running; that's for when you want to do other things after the upgrade before making Nextcloud live again. When you first connect to the web UI you'll need to click a quick Start upgrade button, but it just does a DB check and starts.
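     (If you did answer yes and left maintenance mode on, a minimal sketch of turning it back off from the same container console; the abc user and /config/www/nextcloud path match the container above, and occ maintenance:mode is the standard Nextcloud command for this.)

         # Disable maintenance mode so Nextcloud serves requests again.
         cd /config/www/nextcloud
         sudo -u abc php occ maintenance:mode --off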
  4. Yeah I did. I downloaded 6.7.2, which seems to be a lot better with the E1000 driver in ESXi, and boom, it just worked. You do have to kill bonding though: change it to no in /boot/config/network.cfg, then reboot, and it works.
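     (For reference, a minimal sketch of the relevant line; BONDING is the stock key in Unraid's network.cfg, and your file will contain other keys as well.)

         # /boot/config/network.cfg -- edit on the flash drive, then reboot
         BONDING="no"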
  5. I had this exact problem. The cause was that I had two LSI cards installed, and they fight over the boot BIOS. With one installed there are no issues; with two installed it randomly locks up and won't boot, just a black screen after the BIOS screen. It doesn't present the LSI boot menu because both cards are fighting for the boot slot. Sometimes it will boot up after a few restarts, work for a few minutes or hours, then disks will randomly get read errors; sometimes when you boot up some disks will just be disabled and not be there. You can use the LSI flash tool to delete just the boot BIOS. The downside is that you can no longer boot off a SATA drive on the LSI cards, but with Unraid you are booting off USB, so that's no issue. I deleted it off both my LSI cards so it would stop happening. This caused no end of issues for me: I replaced my PSU, motherboard, CPU and memory, and it was still happening. After a long time I discovered a post about two LSI cards. https://www.ixsystems.com/community/threads/lsi-9207-8i-can-i-erase-just-the-bios-leave-the-fw.60861/ https://forums.servethehome.com/index.php?threads/lsi-9211-8i-it-mode-stuck-during-loading.8183/ You will find a heap of articles around about it.
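     (Roughly what that looks like with LSI's sas2flash tool. The erase-region number is an assumption pulled from threads like the ones above -- verify it against the sas2flash docs for your exact card before running anything, since erasing the wrong region can brick the controller.)

         # Confirm which controllers sas2flash can see first.
         sas2flash -listall
         # -o enables advanced operations; -e 5 is cited in the linked
         # threads as erasing just the boot services (option ROM) region.
         sas2flash -o -e 5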
  6. That usually hits all disks on that 5V cable though. I guess it's looking for patterns: if it's two disks, what do those two disks share? Same controller, or same 5V cable? Process of elimination.
  7. I've run into these types of problems before. Unfortunately it can be a few things. A disk failure is always possible, but a second disk running into the same issue at the same time is unusual, unless there has been some kind of event, like vibration, that damaged both drives. A quick way to rule this out: try moving the SATA drives to the onboard controller, or, if they are already on the onboard SATA controller, try moving them to the LSI card. It could be a data cable issue or a controller issue. If all drives were running into the issue it could be RAM or PSU, but given it's only two drives, I'd look at your SATA data cables or controller.
  8. Got it working; changed it to a SATA controller.
  9. Does anyone know what setting I need to set to see a virtual disk in Unraid? I got the network working and everything else; I just can't see a disk. It comes up as Data Disk 1, Unassigned, and I can't see the disk in dmesg. I've attached a screenshot of my ESXi settings. Any pointers?
  10. I really do love the simplicity of Unraid for my home server uses. My question and request is probably far from what has been posted here, but I run an ESXi server in the data centre for my critical cloud apps. I have been unsuccessful trying to get Unraid working on ESXi, booting in a VM from a USB thumb drive. Are there any plans to release a cut-down Unraid version for VMs? I am happy to pay for a licence for it. I simply want a stripped-down version of Unraid for Docker and VM management. Every other Linux OS has a weird and frankly annoying way to manage Dockers; I really love Unraid's approach to this. In my research into getting this working there was a fair amount of interest in a VM version of Unraid. I would love to beta test Unraid in a VM if you're ever looking for someone to do that.
  11. Thank you for adding that in, Squid. I just tracked down my problems to Mover yesterday. Sent a beer your way.
  12. When I put the config in network.cfg and reboot, nothing seems to change. ifconfig shows only br0 and lo, and I cannot see any Ethernet controllers in dmesg. Maybe previous versions of Unraid had a driver that is no longer in the current build.
  13. Are there some CLI commands or a config I can change to get it out of bonding and back to the interface directly?
  14. Yeah, it is on br0. It's kind of impossible to get the diagnostics given the server is a long way away. I have been unable to get eth1 to appear.
  15. Can anyone help with Unraid on ESXi 6.7? I have no working network driver after booting Unraid 6.6.7 off a USB. I've tried E1000, and VMXNET3 was the first I tried; neither worked. When I boot up Ubuntu it works with no issues. I can't really flash the USB with an older version now that it's sitting in a server within the DC. Any thoughts?
  16. Thank you for explaining that. I was thinking that if I filled the VMDK I could expand it, given I have 5TB of array storage. But I guess I'll just drop in a second VMDK, or make sure I pick the correct size at the start.
  17. Next week I am deploying a server in the DC behind a firewall, and I am looking at running Unraid within ESXi. Looking at the methods to boot, I will plug an Unraid USB into the internal USB header and boot from PLOP. I can't see any issues with this method; it looks fine. I cannot find any info, however, on the data disk side of things. I was planning to create a 1TB VMDK on the Dell managed RAID storage within ESXi. Assigning this 1TB VMDK to the Unraid VM, will it show up as Disk 1 to be assigned in Unraid? If I needed to increase the VMDK size later, to say 2TB, is Unraid going to be OK with that? I see a lot of people passing through controllers and so on, but I am happy to let the Dell server maintain the 14-disk array in RAID 6 on its own. I am mostly using Unraid for the Docker functions, running some of the Dockers I have at home on my Unraid server here as well. I really love how simple Unraid is to maintain and upgrade compared with a plain Linux server; I am just looking at using the service side of Unraid, pretty much in VM form. Either this is just going to work and that's why there is no info, or this isn't something many people are doing at all and it won't work. Any help is appreciated.
  18. Thanks, --log-opt max-size=50m --log-opt max-file=1 worked a treat.
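     (For anyone else landing here: on Unraid these go in the container's Extra Parameters field. The plain docker run equivalent looks something like the sketch below; the image name is an assumption, substitute your own.)

         # json-file log driver options: cap logging at a single 50 MB file.
         docker run -d --name binhex-medusa \
           --log-opt max-size=50m --log-opt max-file=1 \
           binhex/arch-medusa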
  19. Does anyone know where the log location is for containers? I clicked the Container Size button and found my logs are 6.45 GB, but when I poke around inside the container in a shell I cannot find these logs anywhere. Anyone know where they are kept?

         Name             Container   Writable   Log
         -----------------------------------------------
         binhex-medusa    1.34 GB     101 MB     6.45 GB
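     (Pointer for anyone searching: with Docker's default json-file log driver the log lives on the host, not inside the container, which is why a shell inside the container never finds it. You can ask Docker for the exact path:)

         # Print the host-side path of the container's json log file.
         docker inspect --format '{{.LogPath}}' binhex-medusa
         # Typically /var/lib/docker/containers/<id>/<id>-json.log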
  20. I have Intel 600p SSDs in my Unraid box, and I've noticed I don't have the Media Wear Indication in the drive's SMART section. The attribute is 233 according to Intel's website, but once I apply that, nothing happens in the Attributes. Is there somewhere else to add it?
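     (One hedged way to see what the drive actually reports: the 600p is NVMe, and NVMe drives expose a health log rather than the classic numbered SMART attribute table, which may be why adding attribute 233 changes nothing. The device name below is an example.)

         # Dump everything the drive reports. NVMe devices show health-log
         # fields (percentage_used, data units written, ...) instead of
         # numbered attributes like 233. /dev/nvme0 is an example name.
         smartctl -a /dev/nvme0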
  21. Not arguing, I just assumed you might run Plex like some other people.
  22. Don't use Plex, or turn off Maintenance?
  23. Another option to add to the plugin which would make it even better: we have the hourly cron check that skips Mover if the cache is not at or over the threshold percentage. How about an exception: if all data disks are already spun up during the cron check, you might as well run Mover. Since 90% of the people running this plugin do so to avoid moving (and spinning disks up) unless at X threshold, if the disks are all already spinning, why wait for the threshold and force them all to spin up later? Just an idea for offloading the data to the parity array when it makes sense to. A rough sketch of the logic is below.
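     (Purely a hypothetical sketch of the suggested check, not plugin code; the disk list and mover path are assumptions for illustration.)

         # If no array disk is in standby, kick off mover early.
         all_spun_up=true
         for d in /dev/sd[b-e]; do                    # example array disks
           if hdparm -C "$d" | grep -q standby; then  # drive is spun down
             all_spun_up=false
             break
           fi
         done
         $all_spun_up && /usr/local/sbin/mover        # assumed mover path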
  24. I get what you are saying here: you could fill the cache drive to 100% while Mover is running. I have experienced this, and if you have Unraid set up incorrectly, dockers will crash, which is probably what you are getting at here.

     Under Shares, make sure Use cache is set to Prefer on any of the shares you currently have set to Only. If the cache drive fills to 100%, any files that need to be changed will be written to the array; when Mover runs next time there will be free space again, and it will move those files back to the cache drive. It's the same as wanting those files on your cache drive only, but in the event of a 100% full cache the system has a place to keep operating. You also have to include some drives from your array in the share where you want this overflow to take place. The only thing that happens is that any access to a file sitting on the array instead of the cache will read at array speed; once Mover runs again, that file is back on the cache drive at cache speed.

     Completely the opposite, in my opinion: the Unraid setup between cache and array is amazing. If you use the Only option on your share, you will run into a bad day: once the cache drive nears 100% you will get I/O errors and at some point a docker crash or system lockup.

     If you are running into weird problems like "disk full" when the cache drive is only at 70% or 80%, that is a BTRFS bug, and you can run a rebalance on the cache drive to reclaim the space. Look at your cache drive's used space, then click on the cache drive on the left; under Balance you should see something like "Data, single: total=83.01GiB, used=80.65GiB". See whether that total is around what Unraid shows as used space. You can press Balance manually and it will clean it up for you. I have a cron job that runs "btrfs balance start -dusage=75 /mnt/cache" every week just to keep it clean.

     It might actually be useful if this plugin compared the cache usage df reports against what the BTRFS filesystem reports, and ran a rebalance when they don't match. Maybe another plugin should keep an eye on that; anyway, I find the cron job enough to do the job.
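     (For reference, a minimal sketch of that weekly cron job; the schedule is an example, and on Unraid you'd typically add it via something like the User Scripts plugin so it survives a reboot.)

         # Weekly: rewrite data chunks that are <=75% full so part-empty
         # chunks get compacted and their space returned to the pool.
         0 3 * * 0 btrfs balance start -dusage=75 /mnt/cache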
  25. Not sure if I missed something here, but checking the syslog there is no new data there at all saying why it didn't run. I've tried a few setting changes, and each hour Mover doesn't start. Any pointers to something I might have missed, or do I need to reboot after installing the plugin?