
JasonK (Members - 97 posts)

Posts posted by JasonK

  1. 4 hours ago, itimpi said:

    It would not pause it until the Pause time came up.    This is a deliberate decision as it is easy to manually pause it just after starting it and more often than not if you start it manually you WANT to leave it running until the increments start kicking in. 

    Gotcha.  Thanks again for the info.  After pausing last night, plugin kicked it off at midnight as expected as well as paused it at 6am as expected.

     

    Thanks!

  2. 2 hours ago, itimpi said:

    Not sure what question you are asking?   There can be a delay (up to 15 minutes) before the plugin runs its idling monitor task and notices that a Manual check has been initiated.

     

    if you want it to only run the Manual check between the increment times specified then hit the Pause button and let the plugin take over Resume/Pause activity.

    Thanks for the reply!  After reading your comment of "...up to 15 minutes" I went ahead and started another manual parity check...It never did pause it:

     

    image.png.c52c6916760fddff3c61c63127f4075c.png

     

    I've manually paused it and will see if the plugin kicks it off at midnight and runs til 6am (per its settings).

     

    Thx

  3. Just installed the plugin.  Changed the settings to below:

     

    image.thumb.png.66e1449a711b54e62990f98e3750be60.png

     

    Started a manual parity check (it's now 10:46 local time) - and the check is running....it's not getting paused, even though the increment window doesn't start until 00:00 tomorrow morning:

     

    image.thumb.png.753b9b3c1f79c680715dbb139a478b60.png

     

    Did I miss a setting/configuration/something somewhere?

     

    Thanks!
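    For anyone else puzzling over the same behaviour, the window test the plugin effectively applies can be sketched like this (my own illustration, not the plugin's code; the times are the ones from the settings in the post):

    ```shell
    # A manual check started outside the configured increment window
    # (00:00-06:00 here) only gets paused once the plugin's monitor task
    # next runs - the "up to 15 minutes" delay itimpi mentions.
    start="00:00"; stop="06:00"
    now="10:46"   # roughly when the manual check was started in the post

    # bash string comparison works for zero-padded HH:MM times
    if [[ "$now" > "$start" && "$now" < "$stop" ]]; then
      status="inside increment window: check may keep running"
    else
      status="outside increment window: plugin will pause it on its next monitor pass"
    fi
    echo "$status"
    ```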

  4. Hey folks.  Have an array with dual parity.  Parity 1 drive died (replacement is on its way).  System was working ok with it missing.  This morning, disk 3 decided to disable out of nowhere:

     

    image.png.d6ab42e000b9c072fa40fcfb9dbfbc71.png

     

    Attached are the diag logs.  I've not rebooted yet as I've got a copy process running with a couple of external drives in Unassigned Devices, copying files off so I can preclear one of them.  I'm not well versed in the logs info - anyone have any thoughts?

     

    Thanks!

    mediaserver2-diagnostics-20231202-1040.zip

  5. 14 hours ago, SmartPhoneLover said:

    Mmmm, if you're talking about the APPS tab, it won't, as I didn't make any changes to the template. There are no changes or additions to be incorporated into the Homarr template.

     

    If you're talking about DOCKER tab, and Homarr not displaying any update available, would like to ask something: It just shows Homarr container as 'up-to-date'? Or do you see it as unavailable or broken link?

    Hey there!  Homarr did have a docker update the other day, which I applied.  Still running 0.12 (per my Homarr page):

     

    image.thumb.png.458151a46599112e989549cd1213e4b9.png

     

    image.png.ddc88fcdc28f604eead73bc167367c58.png

     

    Having just now discovered the "About" entry on the hamburger menu (never really had reason to hunt around/look at the hamburger menu lol), it seems it has updated to 0.13:

     

    image.png.02e949923e3f3074a1de1982a0d1e206.png

     

    So it appears that the version info at the top left of the window just didn't get updated text-wise.

     

  6. 34 minutes ago, SmartPhoneLover said:

    The update is already available on its repository. If you can't pull it, the problem is on your side.

    565675678678.JPG

    565675678678-1.JPG

    I know it's available on github.  The Homarr docker hasn't been updated to use that - thus my question.  Unraid app store isn't showing an update available for it - I'm using the :latest tag as well...

  7. i just upgraded my cache pool from 2x256gb ssds to 2x512gb ssds

     

    Here's what I did:

     

    1) shut down array

    2) pull 1 256gb drive

    3) replace with 512gb drive

    4) start array

    5) unraid detects missing pool disk.  assign new 512gb in its slot

    6) start array

    7) unraid rebuilds 512gb drive with data from other 256gb drive in pool

    8) once rebuild is done, shut down server

     

    repeat steps 2-7 for the other drive

     

    no issues, no muss, no fuss.  sure it took a bit longer than some other methods, but i didn't need to mess with anything and it "just works"
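    The per-drive loop above can be sketched as a dry run (device names are placeholders; the actual assignment happens in the Unraid GUI, not from the shell):

    ```shell
    # Echo-only walkthrough of the swap sequence - one pass per pool member
    count=0
    for old in disk1 disk2; do
      echo "shut down; pull ${old} (256GB); insert the 512GB replacement"
      echo "assign the 512GB drive to ${old}'s slot; start array; wait for the pool rebuild"
      count=$((count + 1))
    done
    echo "drives replaced: ${count}"
    ```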

  8. I recently upgraded my cache pool (was 2 256gb ssds upgraded to 2 512gb ssds).

     

    I'm lazy - i shut down, pulled one 256gb drive from the pool, plugged the 512 in its place.  server came up, saw the missing drive...i selected the new 512gb drive for that slot in the pool, told the array to start...unraid rebuilt the 512gb drive from the other 256 that was still there.

     

    when that was all done rebuilding, shut down, did the same with the other 256gb drive, let rebuild, bam, done.

     

    sure it took longer, but i didn't have to fiddle with things.

  9. One thing to keep in mind - ZFS is enabled only for pools, not the main unraid array.

     

    also another thing to keep in mind - due to how ZFS works, it's not as simple to add/remove drives from the pool as it is from the main unraid array.

     

    lastly, zfs, when using drives of various sizes, will base the array/pool off of the SMALLEST disk in the pool...i.e. if you have a 2tb, 4tb, and 8tb drive in a zfs pool (say, using z1), you will have 4 tb of storage in that pool (2tb drive, 2tb from 4tb drive, 2tb from 8tb drive, with one of those 2tb chunks counting towards parity).
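    The capacity math from that example can be checked directly (sizes in TB, my own sketch of the raidz sizing rule):

    ```shell
    # raidz sizes every member at the smallest disk in the vdev; usable space
    # is (member count - parity disks) * smallest size.
    disks=(2 4 8)   # the 2tb, 4tb, and 8tb drives from the example
    parity=1        # raidz1 spends one disk's worth of space on parity

    smallest=${disks[0]}
    for d in "${disks[@]}"; do
      (( d < smallest )) && smallest=$d
    done

    usable=$(( smallest * (${#disks[@]} - parity) ))
    echo "usable: ${usable} TB"
    ```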

     

    zfs does have its plus sides - bit rot protection built in, etc...but since OP is "new" to zfs, i would HIGHLY recommend not messing with it on unraid until they have a better understanding.....create a truenas VM with some virtual disks that can be set up in a virtual ZFS environment to mess with or something first :)

  10. 6 hours ago, dlandon said:

    That's generally because there aren't enough inotify watches.  Look at the syslog and there should be a message saying that.

     

    If that is the case, install the Tips and Tweaks plugin and set the inotify watches so you have enough.  A little trial and error will get you there.

    Hmm - Is it the "Max User Watches" setting?  If so, this is what I have currently: image.thumb.png.03d9323d9c33242020c4662319900c4a.png
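    For reference, the kernel value behind that setting can be read from the terminal as well (the 524288 figure below is just a common example value, not a recommendation):

    ```shell
    # Read the current kernel limit behind "Max User Watches"
    # (this is the value the Tips and Tweaks plugin writes to)
    limit=$(cat /proc/sys/fs/inotify/max_user_watches)
    echo "fs.inotify.max_user_watches = ${limit}"

    # Raising it for the current boot would look like this:
    # sysctl -w fs.inotify.max_user_watches=524288
    ```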

  11. Uninstalled the old, installed Squid's fixed version.  Went to recreate my folders, created one:

     

    image.png.e56ac24201122f280bf78a399c4a6fb2.png

     

    image.thumb.png.26caa1a7d5e4f98562d8385c70e9edf7.png

     

    I no longer see the apps in the folder preview to the right in the main docker list.  Clicking the arrow to expand, I do see the dockers in the folder.

     

    What am I missing?

  12. 1 hour ago, Mihai said:

    @JasonK I had the same problem and all you have to do is go to Elasticsearch-ES and modify the image to use 7.17.1 instead of 7.14.1, and then restart main TubeArchivist container. It will then take a few minutes for the migrations to run and so on, but it should work.


    image.thumb.png.1dcb8e41e9f148f4f089daec246ad446.png

    Thanks!  That fixed it for me :)

     

    J

  13. Can't seem to get to the web interface.  I have all 3 components installed (all fresh installs earlier today 3/27/22). 

     

    image.thumb.png.0aff600ad1d7cb06c58ecb8bfc5e1606.png

     

    Starting up TubeArchivist and watching the log, here's what I see:

     

    image.thumb.png.99b476049dac7affbcfc45d7be3f10a0.png

     

    {
      "name" : "836da9956d52",
      "cluster_name" : "docker-cluster",
      "cluster_uuid" : "i9vUPW4tTtCDEv1hRtg5KA",
      "version" : {
        "number" : "7.14.1",
        "build_flavor" : "default",
        "build_type" : "docker",
        "build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
        "build_date" : "2021-08-26T09:01:05.390870785Z",
        "build_snapshot" : false,
        "lucene_version" : "8.9.0",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }
    run startup checks
    minial required elasticsearch version: 7.17, please update to recommended version.
    
    run startup checks
    minial required elasticsearch version: 7.17, please update to recommended version.
    
    run startup checks
    minial required elasticsearch version: 7.17, please update to recommended version.
    
    run startup checks
    minial required elasticsearch version: 7.17, please update to recommended version.
    
    run startup checks
    minial required elasticsearch version: 7.17, please update to recommended version.
    
    [uWSGI] getting INI configuration from uwsgi.ini
    *** Starting uWSGI 2.0.20 (64bit) on [Sun Mar 27 14:37:32 2022] ***
    compiled with version: 10.2.1 20210110 on 26 March 2022 12:58:35
    
    os: Linux-5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021
    nodename: 8f7ded6e4c5b
    machine: x86_64
    clock source: unix
    detected number of CPU cores: 12
    current working directory: /app
    writing pidfile to /tmp/project-master.pid
    detected binary path: /usr/local/bin/uwsgi
    !!! no internal routing support, rebuild with pcre support !!!
    uWSGI running as root, you can use --uid/--gid/--chroot options
    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
    
    your processes number limit is 256696
    your memory page size is 4096 bytes
    detected max file descriptor number: 40960
    lock engine: pthread robust mutexes
    thunder lock: disabled (you can enable it with --thunder-lock)
    uwsgi socket 0 bound to TCP address :8080 fd 3
    uWSGI running as root, you can use --uid/--gid/--chroot options
    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
    
    Python version: 3.10.3 (main, Mar 17 2022, 05:23:29) [GCC 10.2.1 20210110]
    
    *** Python threads support is disabled. You can enable it with --enable-threads ***
    Python main interpreter initialized at 0x558333ab52c0
    uWSGI running as root, you can use --uid/--gid/--chroot options
    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
    
    your server socket listen backlog is limited to 100 connections
    your mercy for graceful operations on workers is 60 seconds
    mapped 154000 bytes (150 KB) for 1 cores
    *** Operational MODE: single process ***
    run startup checks
    minial required elasticsearch version: 7.17, please update to recommended version.
    
    VACUUM: pidfile removed.

     

    Any thoughts?  When I try going to http://192.168.0.19:8000 I get:

     

    image.png.198af5d22b8e48ee894f0697349969bc.png

     

    Latest version of Chrome.  No port conflicts for port 8000
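    (For later readers: the failing startup check boils down to the version number in that JSON being older than 7.17.  That comparison can be reproduced with `sort -V`; the parsing below is my own sketch, not TubeArchivist's actual code.)

    ```shell
    # Extract the "number" field from the startup JSON shown above and compare
    # it to the required version using sort -V (version-aware ordering).
    json='"number" : "7.14.1",'
    required="7.17"

    have=$(echo "$json" | sed -n 's/.*"number" : "\([0-9.]*\)".*/\1/p')

    # If the required version sorts last, the installed one is too old
    newest=$(printf '%s\n' "$have" "$required" | sort -V | tail -n1)
    if [ "$newest" = "$required" ] && [ "$have" != "$required" ]; then
      echo "elasticsearch $have is older than required $required"
    fi
    ```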

  14. Quote

    For a disk to be considered a preclear candidate, it must have one of the following attributes:

    An Unassigned disk with no partitions.

    An Unassigned disk with a preclear signature and a partition with no file system.

     

    IMO - it would have been better to allow any unassigned device, and give redundant warnings about preclearing drives with partitions.

     

    Now it's a pain to have to jump thru some hoops in order to preclear a new EasyStore external, because they come with partitions on them and such.

     

    For those who see this reply later looking to do the same thing, you can clear the partitions and such from the command line.

     

    *** DOUBLE CHECK YOUR DEVICE ASSIGNMENT BEFORE DOING THE BELOW: ***

     

    I cleared the filesystem (my easystore was on /dev/sda), by doing:

     

    wipefs -a /dev/sda1   # clear the filesystem signature first
                          # (replace sda with whatever device assignment your drive is showing)
    
    Once it shows successful,
    
    wipefs -a /dev/sda    # then remove the partition table itself
    

     

    The UA preclear will then see the device as available for preclearing
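    A quick sanity check before running wipefs doesn't hurt; this guard is my own sketch (the device path is a placeholder), not part of the plugin:

    ```shell
    # Refuse to touch a device that currently has anything mounted from it -
    # an array disk in use would show up in /proc/mounts.
    dev="/dev/nonexistent0"   # placeholder; substitute your drive's device here

    if grep -q "^${dev}" /proc/mounts; then
      verdict="refusing: ${dev} has mounted partitions"
    else
      verdict="no mounts found on ${dev}; double-check serial/size before wiping"
    fi
    echo "$verdict"
    ```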

     

    I'm not responsible if you wipe the FS/partitions from a drive in your array because you didn't double/triple check ;)

  15. On 2/23/2022 at 2:33 PM, Soldius said:

    Hi, I am trying to install FFMPEG inside the OS using the terminal but I cannot login using the password "Docker!" Is there another password that I can use for the sudo command? Thank you

    In the docker info it specifically states this is a known issue and needs to be fixed.

    The bridge mapping isn't coming across inside of Debian when VNCed into it (i.e. start Tartube, go to select the destination, and 'bridge', which is mapped in the docker settings to point to a valid location on the array, isn't showing anywhere)

     

    unraid_mnt shows that it's pointing to the right place, but when I go to it in the docker, nothing is there (and the folder has a temp file I put there to confirm visibility)

  17. On 8/26/2021 at 7:48 PM, Agent Crimson said:

    I am a relatively new unRaid user with a small server. Originally unRaid was a hard sell on me for the sole reason it didn't have zfs. I love how user friendly it is and applications like Nextcloud and Plex were simply a few clicks. I am also a heavy Proxmox user and due to how unRaid's parity works I got fed up and was about to leave unRaid. If ZFS is added it would be the best thing; I would hands down become an unRaid power user too. There are a lot of things I just prefer about Proxmox which is not a discussion for here but I definitely won't be biased and will be running a hybrid setup thanks to zfs

    unraid's parity works, for the most part, like any other parity system.

     

    you can just use freenas if you want zfs :)
