Everything posted by JasonK

  1. Gotcha. Thanks again for the info. After I paused it last night, the plugin kicked it off at midnight and paused it again at 6am, both as expected. Thanks!
  2. Thanks for the reply! After reading your comment of "...up to 15 minutes" I went ahead and started another manual parity check... It never did pause it. I've manually paused it and will see if the plugin kicks it off at midnight and runs til 6am (per its settings). Thx
  3. Just installed the plugin. Changed the settings to below: Started a manual parity check (it's now 10:46 local time) and the check is running... shouldn't it be paused until 00:00 tomorrow morning? Did I miss a setting/configuration/something somewhere? Thanks!
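      In the meantime, one way to confirm from the console whether a check is actually running or paused is Unraid's custom /proc/mdstat (the variable names below are what I see on my box and may differ slightly between versions):
        # rough check of the parity check/resync state from the Unraid console
        grep -E 'mdState|mdResync' /proc/mdstat
        # mdResyncPos greater than 0 while mdResync is non-zero means a check is in progress;
        # the plugin pauses/resumes that same check on its schedule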
  4. Hey folks. Have an array with dual parity. The Parity 1 drive died (replacement is on its way). The system was working OK with it missing. This morning, disk 3 decided to disable itself out of nowhere. Attached are the diag logs. I've not rebooted yet as I've got a copy process running with a couple of external drives in Unassigned Devices, copying files off so I can preclear one of them. I'm not well versed in the log info - anyone have any thoughts? Thanks! mediaserver2-diagnostics-20231202-1040.zip
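      In the meantime, one thing worth checking from the console while the copy finishes is the SMART data for the disabled disk (sdX below is just a placeholder for whatever device disk 3 actually is):
        # sketch: pull the SMART health and the usual trouble attributes for the disabled disk
        smartctl -H /dev/sdX
        smartctl -A /dev/sdX | grep -Ei 'reallocated|pending|uncorrect|crc'
        # a climbing UDMA_CRC_Error_Count usually points at cabling rather than the disk itself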
  5. Hey there! Homarr did have a docker update the other day, which was applied. Performed that update. Still running 0.12 (per my Homarr page). Having just now discovered the "About" entry on the hamburger menu (never really had reason to hunt around/look at the hamburger menu lol), it seems it has updated to 0.13. So it appears that the version info at the top left of the window just didn't get its text updated.
  6. I know it's available on GitHub. The Homarr docker hasn't been updated to use that - thus my question. The Unraid app store isn't showing an update available for it, and I'm using the :latest tag as well...
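      For what it's worth, this is roughly how to check what :latest actually contains locally versus the registry (the ghcr.io/ajnart/homarr image name is an assumption based on the common template - adjust it to whatever your template uses):
        # sketch: compare the local :latest image against what the registry offers
        docker image inspect --format '{{.Id}} {{.RepoDigests}}' ghcr.io/ajnart/homarr:latest
        docker pull ghcr.io/ajnart/homarr:latest    # pulls a newer digest if the registry has one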
  7. Any idea if/when the docker will be updated to the 0.13 version of Homarr that's on GitHub?
  8. I just upgraded my cache pool from 2x 256GB SSDs to 2x 512GB SSDs. Here's what I did: 1) shut down array, 2) pull one 256GB drive, 3) replace with a 512GB drive, 4) start array, 5) Unraid detects the missing pool disk - assign the new 512GB in its slot, 6) start array, 7) Unraid rebuilds the 512GB drive with data from the other 256GB drive in the pool. Once the rebuild is done, shut down the server and repeat 1-7 for the other drive. No issues, no muss, no fuss. Sure, it took a bit longer than some other methods, but I didn't need to mess with anything and it "just works".
  9. I recently upgraded my cache pool (was 2x 256GB SSDs, upgraded to 2x 512GB SSDs). I'm lazy - I shut down, pulled one 256GB drive from the pool, and plugged a 512GB in its place. The server came up and saw the missing drive... I selected the new 512GB drive for that slot in the pool and told the array to start... Unraid rebuilt the 512GB drive from the other 256GB that was still there. When that was all done rebuilding, I shut down, did the same with the other 256GB drive, let it rebuild, bam, done. Sure, it took longer, but I didn't have to fiddle with things.
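      If you want to watch a rebuild like that from the console instead of the GUI, and assuming the pool is the default btrfs RAID1, something like this shows the pool members and progress (/mnt/cache is the usual mount point - adjust if yours is named differently):
        # sketch: check btrfs pool membership and rebuild/balance progress
        btrfs filesystem show /mnt/cache      # lists the devices currently in the pool
        btrfs balance status /mnt/cache       # shows progress while data is being redistributed
        btrfs filesystem df /mnt/cache        # confirms data/metadata are back to RAID1 when done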
  10. One thing to keep in mind - ZFS is enabled only for pools, not the main Unraid array. Another thing to keep in mind - due to how ZFS works, it's not as simple to add/remove drives from the pool as it is from the main Unraid array. Lastly, when using drives of various sizes, ZFS will base the pool's capacity off of the SMALLEST disk in the pool... i.e. if you have a 2TB, 4TB, and 8TB drive in a ZFS pool (say, using raidz1), you will have 4TB of usable storage in that pool (2TB from the 2TB drive, 2TB from the 4TB drive, 2TB from the 8TB drive, with one of those 2TB chunks counting towards parity). ZFS does have its plus sides - bit rot protection built in, etc. - but since OP is "new" to ZFS, I would HIGHLY recommend not messing with it on Unraid until they have a better understanding... create a TrueNAS VM with some virtual disks that can be set up in a virtual ZFS environment to mess with first :)
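      To put rough numbers on that sizing rule: usable space in a raidz vdev is roughly (number of disks minus parity disks) times the smallest disk, ignoring overhead. A quick back-of-the-envelope check in plain shell arithmetic (sizes in TB, matching the example above):
        # sketch: rough raidz1 usable capacity with mixed 2TB/4TB/8TB disks
        smallest=2; disks=3; parity=1
        echo "usable: $(( (disks - parity) * smallest )) TB"            # -> 4 TB, as described above
        echo "unused: $(( (4 - smallest) + (8 - smallest) )) TB sits idle on the larger disks"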
  11. This workaround just worked without issue for me when connecting 2 external WD hard drives (that showed the same ID) to a Windows 10 VM. Both drives show in Explorer and can be worked with normally. Thanks!
  12. Yup - that is the setting. Changed it to 1024000 and after a bit File Activity was seeing files. Thanks!
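      For reference, as far as I can tell that GUI setting maps to the kernel's inotify watch limit, so it can also be checked or bumped from the console:
        # sketch: inspect and raise the inotify watch limit the File Activity plugin relies on
        sysctl fs.inotify.max_user_watches                  # show the current limit
        sysctl -w fs.inotify.max_user_watches=1024000       # raise it (not persistent across reboots)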
  13. Hmm - Is it the "Max User Watches" setting? If so, this is what I have currently:
  14. Installed the plugin. Set it to monitor everything but cache... and to display activity by disk. Saved settings. Told it to start monitoring... it shows that it's running. After about 10 secs, I do a refresh and it shows as Stopped. If I start it again, the same thing happens - after about 10 secs, it stops. Any ideas?
  15. FYI - as of around 6am ET or so, Rogers Communications in Canada is having a major outage - internet and cell - so this can cause some problems with Canadian endpoints
  16. Mine has the duplicate host port 4 (I just updated to 6.10.3 yesterday - didn't have this issue before then). Removed port 4 and will see how it does.
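      An easy way to confirm the duplicate mapping is gone after editing the template is to list the container's published ports from the console (the container name below is a placeholder):
        # sketch: list the published port mappings for a container
        docker port <container-name>
        docker inspect --format '{{json .HostConfig.PortBindings}}' <container-name>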
  17. Uninstalled the old, installed Squid's fixed version. Went to recreate my folders and created one: I no longer see the apps for the folder listed to the right in the main docker list. Clicking the arrow to expand it does show the dockers in the folder. What am I missing?
  18. Can't seem to get to the web interface. I have all 3 components installed (all fresh installs earlier today, 3/27/22). Starting up TubeArchivist and watching the log, here's what I see:
        {
          "name" : "836da9956d52",
          "cluster_name" : "docker-cluster",
          "cluster_uuid" : "i9vUPW4tTtCDEv1hRtg5KA",
          "version" : {
            "number" : "7.14.1",
            "build_flavor" : "default",
            "build_type" : "docker",
            "build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
            "build_date" : "2021-08-26T09:01:05.390870785Z",
            "build_snapshot" : false,
            "lucene_version" : "8.9.0",
            "minimum_wire_compatibility_version" : "6.8.0",
            "minimum_index_compatibility_version" : "6.0.0-beta1"
          },
          "tagline" : "You Know, for Search"
        }
        run startup checks
        minial required elasticsearch version: 7.17, please update to recommended version.
        (the two lines above appear five times in a row)
        [uWSGI] getting INI configuration from uwsgi.ini
        *** Starting uWSGI 2.0.20 (64bit) on [Sun Mar 27 14:37:32 2022] ***
        compiled with version: 10.2.1 20210110 on 26 March 2022 12:58:35
        os: Linux-5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021
        nodename: 8f7ded6e4c5b
        machine: x86_64
        clock source: unix
        detected number of CPU cores: 12
        current working directory: /app
        writing pidfile to /tmp/project-master.pid
        detected binary path: /usr/local/bin/uwsgi
        !!! no internal routing support, rebuild with pcre support !!!
        uWSGI running as root, you can use --uid/--gid/--chroot options *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
        your processes number limit is 256696
        your memory page size is 4096 bytes
        detected max file descriptor number: 40960
        lock engine: pthread robust mutexes
        thunder lock: disabled (you can enable it with --thunder-lock)
        uwsgi socket 0 bound to TCP address :8080 fd 3
        uWSGI running as root, you can use --uid/--gid/--chroot options *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
        Python version: 3.10.3 (main, Mar 17 2022, 05:23:29) [GCC 10.2.1 20210110]
        *** Python threads support is disabled. You can enable it with --enable-threads ***
        Python main interpreter initialized at 0x558333ab52c0
        uWSGI running as root, you can use --uid/--gid/--chroot options *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
        your server socket listen backlog is limited to 100 connections
        your mercy for graceful operations on workers is 60 seconds
        mapped 154000 bytes (150 KB) for 1 cores
        *** Operational MODE: single process ***
        run startup checks
        minial required elasticsearch version: 7.17, please update to recommended version.
        VACUUM: pidfile removed.
      Any thoughts? When I try going to http://192.168.0.19:8000 I get: Latest version of Chrome. No port conflicts for port 8000.
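      Reading that log back, the startup check is complaining that the Elasticsearch container is 7.14.1 while TubeArchivist wants at least 7.17, so presumably the fix is pointing the ES container at a 7.17.x image. Something along these lines should confirm what version is actually answering and pull a newer one (port 9200 and the official image name are assumptions based on the usual template defaults):
        # sketch: confirm the running Elasticsearch version, then grab a 7.17.x image
        curl -s http://192.168.0.19:9200 | grep '"number"'
        docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.0
        # then point the repository field of the ES container template at that tag and restart the stack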
  19. IMO - it would have been better to allow any unassigned device and give redundant warnings about preclearing drives with partitions. Now it's a pain to have to jump thru some hoops in order to preclear a new EasyStore external, because they come with partitions on them and such. For those who see this reply later looking to do the same thing, you can clear the partitions and such from the command line. *** DOUBLE CHECK YOUR DEVICE ASSIGNMENT BEFORE DOING THE BELOW ***
      I cleared the filesystem (my EasyStore was on /dev/sda) by doing:
        wipefs -a /dev/sda1    (replace sda with whatever device assignment your drive is showing)
      Once it shows successful:
        wipefs -a /dev/sda     (to remove the partitions)
      The UA preclear will then see the device as available for preclearing. I'm not responsible if you wipe the FS/partitions from a drive in your array because you didn't double/triple check.
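      Before running either wipefs command above it's worth confirming which device is which, since one wrong letter here is catastrophic. Something like this, run first, makes the double check easy (sda is just the example from above):
        # sketch: verify the target device before wiping anything
        lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT /dev/sda
        # only proceed with wipefs once the size/model/serial match the external drive you expect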
  20. In the docker info it specifically states this is a known issue and needs to be fixed.
  21. The bridge mapping isn't coming across inside of Debian when VNCed into it (i.e. start Tartube, go to select the destination, and 'bridge', which is mapped in the docker settings to point to a valid location on the array, isn't showing anywhere). unraid_mnt shows that it's pointing to the right place, but when I go to it in the docker, nothing is there (and the folder has a temp file I put there to confirm visibility).
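      One sanity check from the Unraid console is to ask Docker what it thinks is mounted into the container, which shows whether the mapping even made it past the template (the container name 'tartube' is a guess - use whatever yours is actually called):
        # sketch: list the bind mounts Docker actually created for the container
        docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' tartube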
  22. I'm starting to get ticked off at PIA and their ongoing on-again, off-again DNS issues. It worked great for a long time, but these past few weeks have been a pain. Anyone recommend a decent, cheap VPN provider that works with Binhex's *vpn containers? Looking to switch!
  23. Unraid's parity works, for the most part, like any other parity system. You can just use FreeNAS if you want ZFS.
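      For anyone curious what "like any other parity system" means in practice: single parity is just XOR across the data disks, so any one missing disk can be recomputed from the remaining disks plus parity. A tiny illustration with made-up byte values:
        # sketch: XOR parity across three made-up data bytes
        d1=0xA5; d2=0x3C; d3=0x7E
        parity=$(( d1 ^ d2 ^ d3 ))
        printf 'parity byte:  0x%02X\n' "$parity"
        # if disk 2 dies, its byte comes back from the survivors plus parity:
        printf 'recovered d2: 0x%02X\n' $(( parity ^ d1 ^ d3 ))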