Posts posted by Maticks

  1. So... the Docker updates upgrade the container itself; they don't upgrade Nextcloud inside it.

    You can do it through the web UI, but I find the wheels fall off sometimes and it becomes a pain.

     

    I prefer doing mine through the CLI; it's also easier to fix things if something goes wrong when you're in the CLI.

    Open a console on the Docker container.
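    From the Unraid host that's something like this (assuming your container is named nextcloud; use whatever yours is actually called, or just use the Console option in the Unraid GUI):

    docker exec -it nextcloud /bin/bash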

     

    cd /config/www/nextcloud/updater/

    root@bd32ce1bcd66:/config/www/nextcloud/updater# sudo -u abc php updater.phar 

     

    This launches the CLI upgrade tool, which tells you what version you are on, and you can run the upgrade from there.


    Nextcloud Updater - version: v14.0.2RC2-7-g57268cb

    Current version is 15.0.5.

    Update to Nextcloud 16.0.3 available. (channel: "stable")
    Following file will be downloaded automatically: https://download.nextcloud.com/server/releases/nextcloud-16.0.3.zip

    Steps that will be executed:
    [ ] Check for expected files
    [ ] Check for write permissions
    [ ] Create backup
    [ ] Downloading
    [ ] Verify integrity
    [ ] Extracting
    [ ] Enable maintenance mode
    [ ] Replace entry points
    [ ] Delete old files
    [ ] Move new files in place
    [ ] Done

    Start update? [y/N] y

    Info: Pressing Ctrl-C will finish the currently running step and then stops the updater.

    [✔] Check for expected files
    [✔] Check for write permissions
    [✔] Create backup
    [✔] Downloading
    [✔] Verify integrity
    [✔] Extracting
    [✔] Enable maintenance mode
    [✔] Replace entry points
    [✔] Delete old files
    [✔] Move new files in place
    [✔] Done

    Update of code successful.

    Should the "occ upgrade" command be executed? [Y/n] Y

     

    Sit back, drink a coffee and wait for it to finish. It can take a while before there is any output here.

    And then say No here

    Keep maintenance mode active? [y/N] N

     

    If you say Yes, you need to run another command to disable maintenance mode and get Nextcloud back up and running.

    That is only for if you want to do other things after the upgrade before making Nextcloud live again.
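    For reference, turning maintenance mode back off later is just this, run from the same console (same abc user and Nextcloud path as above):

    sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off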

     

    When you first connect to the web UI you will need to click a quick start/upgrade button, but it just does a DB check and starts.

     

     

  2. I had this exact problem. The cause was that I had two LSI cards installed and they fight over the boot BIOS.

    When you have one installed there are no issues; when you have two installed it randomly locks up and won't boot, just a black screen after the BIOS screen.

    It doesn't present the LSI boot menu because both cards are fighting for the boot slot.

     

    Sometimes it will boot up after a few restarts and work for a few minutes or hours, then disks will randomly get read errors; sometimes when you boot up some disks will just be disabled and missing.

     

    You can use the LSI flash tool to delete just the boot BIOS; the downside is you can no longer boot off a SATA drive attached to the LSI cards.

    However, with Unraid you are booting off USB, so that's no issue.
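    Roughly what that looks like with sas2flash; the controller number and the erase-region value are the bits to double-check against the linked threads and the sas2flash docs for your card before you run anything:

    sas2flash -listall          # confirm both LSI cards show up and note their controller numbers
    sas2flash -o -c 0 -e 5      # erase the boot services (BIOS) region on controller 0, then repeat with -c 1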

     

    I deleted the boot BIOS off both of my LSI cards so it would stop happening.

    This caused no end of issues for me; I replaced my PSU, motherboard, CPU and memory and it was still happening. After a long time I discovered a post about two LSI cards.

     

    https://www.ixsystems.com/community/threads/lsi-9207-8i-can-i-erase-just-the-bios-leave-the-fw.60861/

    https://forums.servethehome.com/index.php?threads/lsi-9211-8i-it-mode-stuck-during-loading.8183/

     

    You will find a heap of articles around about it.

  3. That usually hits all disks on that 5V cable, though.

    I guess it's about looking for patterns: if it's two disks, what do those two disks share?

    Same controller or the same 5V cable? Process of elimination.

  4. I've run into these types of problems before.

    Unfortunately it can be a few things. A disk failure is always possible, but a second disk running into the same issue at the same time is unusual.

    Unless there has been some kind of event like vibration that damaged both drives.

     

    A quick way to rule this out: try moving the SATA drives to the onboard controller, or if they are already on the onboard SATA controller, try moving them to the LSI card.

    It could be a data cable issue or controller issue.

     

    If all drives were running into an issue it could be RAM or the PSU, but given it's only two drives I'd look at your SATA data cables or controller.

  5. I really do love the simplicity of Unraid for my home server uses.

     

    My question and request are probably far from what has been posted here.

    But I run an ESX server in the data centre for my critical cloud apps.

    I have been unsuccessful trying to get Unraid working on ESX using a USB thumb drive booting in a VM.

     

    Are there any plans to release a cut-down Unraid version for VMs? I am happy to pay for a licence for it.

    I simply want a stripped-down version of Unraid for Docker and VM management.

     

    Every other Linux OS has a weird and frankly annoying way to manage Docker containers; I really love Unraid's approach to this.

    While researching how to get this working, I found a fair amount of interest in a VM version of Unraid.

     

    I would love to beta test Unraid in a VM if you're ever looking for someone to do that.

  6. Can anyone help with Unraid on ESXi 6.7?

    I have no working network driver after booting Unraid 6.6.7 off a USB.

    I've tried the E1000; VMXNET3 was the first I tried. Neither worked.

    When I boot up Ubuntu it works with no issues :(

     

    I can't really flash the USB with an older version now that it's sitting in a server within the DC.

    Any thoughts?

  7. Next week I am deploying a server in the DC behind a firewall.

    I am looking at running Unraid within ESXi.

    Looking at the boot methods, I will plug an Unraid USB into the inbuilt USB header.

    Booting from PLOP. I can't see any issues with this method; it looks fine.

     

    However, I cannot find any info on the data disk side of things.

    I was planning to create a 1TB VMDK on the Dell managed RAID storage within ESXi.

    If I assign this 1TB VMDK to the Unraid VM, will it show up as Disk 1 to be assigned in Unraid?

    If I needed to increase the VMDK size later to, say, 2TB, is Unraid going to be OK with that?

     

    I see a lot of people passing through controllers etc., but I am happy to let the Dell server maintain the 14-disk array in RAID 6 on its own.

    I am mostly using Unraid for the Docker functions, and running some of the Dockers I have at home on my Unraid server here as well.

     

    I really love the way Unraid is simple to maintain and upgrade compared to a Linux server; I am just looking at using the services side of Unraid pretty much in VM form.

    Either this is going to just work and that's why there is no info, or this isn't something many people are doing at all and it won't work.

    Any help is appreciated.

     

  8. Does anyone know where the log location is for containers? I clicked the Container Size button and found my logs are 6.45GB.

    But when I poke around inside the container with a shell I cannot find where these logs are.

    Anyone know where they are kept?

     

    Name             Container   Writable   Log
    ---------------------------------------------------------------------
    binhex-medusa    1.34 GB     101 MB     6.45 GB
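    If it's the Docker engine's own log for the container (rather than a file inside it), this should point straight at it from the Unraid host, assuming the default json-file log driver:

    docker inspect --format '{{.LogPath}}' binhex-medusa                    # path of the engine-side log
    du -h "$(docker inspect --format '{{.LogPath}}' binhex-medusa)"         # check its size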

  9. 18 minutes ago, trurl said:

    Mine never all spin unless I am doing a parity check or rebuild so obviously not a good time. If you are using Turbo Write I guess they are likely to be all spinning

    Don't use Plex, or turn off maintenance?

  10. Another option to add to the plugin that would make it even better.

    We already have the hourly cron job check: if the cache isn't at or over the threshold percentage, don't run the Mover.

     

    How about an exception: if all the data disks are already spun up when the cron job check runs, you might as well run the Mover (see the sketch below).

    Since 90% of people run this plugin precisely to avoid moving until some threshold X, so that disks don't get spun up, if they are already all spinning why wait for the threshold and force all the drives to spin up again later?
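    Something like this, as a rough sketch only (the device list, threshold and mover path are assumptions to adjust for your own box, not anything the plugin actually does):

    #!/bin/bash
    THRESHOLD=70                                  # run the mover once the cache is this % full (assumption)
    CACHE_USED=$(df --output=pcent /mnt/cache | tail -1 | tr -d ' %')

    # Check whether every array data disk is already spun up (SATA disks, via hdparm).
    all_spinning=true
    for disk in /dev/sd[b-e]; do                  # replace with your actual data disks
        if hdparm -C "$disk" | grep -q standby; then
            all_spinning=false
            break
        fi
    done

    # Move if the threshold is hit, OR if every data disk is spinning anyway.
    if [ "$CACHE_USED" -ge "$THRESHOLD" ] || [ "$all_spinning" = true ]; then
        /usr/local/sbin/mover                     # standard Unraid mover path, as far as I know
    fi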

     

    Just an idea for offloading the data to the parity array when it makes sense to.

     

  11. On 10/28/2018 at 3:07 AM, pingu3000 said:

    I am still amazed how hard you try to defend the current primitive functioning of the cache drive while it could very obviously be way more refined.

     

    If the mover starts moving at 10% capacity left of the cache drive, i could NOT fill the cache drive faster with network uploads (to the server) and internet downloads (on the server) than it is emptying by the data being written to the array. This means the cache drive will always be used to it's (90%) maximum and the array would only be spinned up when necessary. The cache drive will never be completely full and data written to the array will only be done by the mover.

     

    I brought up the network speed argument before and i think since using turbowrite and a new intel network card i actually might be able to write faster to the array over lan than the 400 Mbit/s from before. That point might be mute after testing. If not, it would still be my biggest gripe.

     

    I understand your point about the minimum free setting, it doesn't concern me much though. i transfer files of less than 10-20% capacity of my cache disk.

     

    Before starting with quizzes we should understand each other, meaning you should understand my point. :P

    I get what you are saying here: you could fill the cache drive to 100% while the mover is running.

    I have experienced this, and if you have Unraid set up incorrectly Dockers will crash, which is probably what you are getting at here.

     

    Under Shares, make sure the cache setting is set to Prefer on any of the shares you currently have set to Only.

    If the cache drive fills to 100%, any files that need to be written will go to the array instead; the next time the mover runs there will be free space again and it will move those files back to the cache drive. It's the same as wanting those files on your cache drive only, but in the event of a 100% full cache the system has somewhere to keep operating.

    You also have to include some drives from your array in the share where you want this overflow to take place.

    The only thing that happens here is that any access to a file sitting on the array instead of the cache drive will read at array speed.

    Once the mover runs again, that file will be back on the cache drive at cache speed.

     

    Completely the opposite in my opinion: the Unraid setup between cache and array is amazing.

    If you don't use the Prefer option on your share you will have a bad day: once the cache drive nears 100% you will get I/O errors and at some point a Docker crash or system lockup.

     

    If you are running into weird problems like the disk showing full when the cache drive is only at 70% or 80%, that is a BTRFS bug; you can run a rebalance on the cache drive to clear the space.

    Look at your cache drive's used space, then click on the cache drive on the left; under Balance you should see something like "Data, single: total=83.01GiB, used=80.65GiB".

    See if the total is around what your used space is in Unraid.

    You can manually press Balance and it will clean it up for you.

     

    I have a cron job that runs "btrfs balance start -dusage=75 /mnt/cache" every week just to clean it up.
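    On Unraid one common way to wire that up is a .cron file under /boot/config/plugins/dynamix/ (the path is an assumption here; adjust it to however you add custom crons on your box):

    # e.g. /boot/config/plugins/dynamix/btrfs-balance.cron
    # Rebalance data chunks under 75% usage on the cache pool, Sundays at 03:00
    0 3 * * 0 btrfs balance start -dusage=75 /mnt/cache >/dev/null 2>&1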

     

    It might actually be useful if this plugin checked the cache usage df reports against what the BTRFS filesystem itself reports, and ran a rebalance when they don't match; roughly the idea sketched below.

    Maybe another plugin should keep an eye on that; anyway, I find the cron job enough to do the job.
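    Sketching the check itself; the 85% cut-off is an arbitrary number for illustration, and the parsing assumes total and used are reported in the same unit (GiB here):

    #!/bin/bash
    # How full are the data chunks BTRFS has allocated on the cache pool?
    # Parses the line:  Data, single: total=83.01GiB, used=80.65GiB
    FILL=$(btrfs filesystem df /mnt/cache | awk -F'[=,]' '/^Data/ { gsub(/[A-Za-z ]/, ""); printf "%d", ($5 / $3) * 100 }')

    # Mostly-empty allocated chunks are what makes df disagree with the real usage,
    # so kick off a rebalance when the allocated chunks are less than 85% full.
    if [ "$FILL" -lt 85 ]; then
        btrfs balance start -dusage=75 /mnt/cache
    fi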

     

     

     

  12. Not sure if I missed something here.

    But checking the syslog there is no new data there at all; it just didn't run for some reason.

    I've tried a few settings changes and each hour the mover doesn't start.

     

    Any pointers to something I might have missed, or do I need to reboot after installing the plugin?

    [Screenshots attached: mover settings, disks.]

  13. With the plugin disabled the drives spin down as per the screenshot below, but without the cachedir plugin a Plex scan will spin up all the disks that are part of that share.

    The plugin did stop that from happening by holding the directories in memory during the scan.

    Though the disks are spun down without cachedir, it's not an ideal situation. :)

     

    In all fairness, when I did my plugin upgrade I never rebooted, so I will go and do that now.

    But my settings for cachedir are below as well; do they look right? [Screenshots attached: plugin disabled, cachedir settings.]

     

     
