Jaybau

Members
  • Posts: 185
  • Joined
  • Last visited


Jaybau's Achievements

Explorer (4/14)

Reputation: 16

Community Answers (2)

  1. I had SWAG working previously, but something changed, and now I get an error about "Permissions could not be set. This is probably because your volume mounts are remote or read-only."

     Container log:

       chown: cannot dereference '/config/keys/letsencrypt': No such file or directory
       find: /var/lib/letsencrypt: No such file or directory
       find: /var/log/letsencrypt: No such file or directory
       /etc/s6-overlay/s6-rc.d/init-certbot-config/run: line 372: 361 Illegal instruction  certbot certonly --non-interactive --renew-by-default
       chown: cannot dereference '/config/keys/letsencrypt': No such file or directory
       find: /var/lib/letsencrypt: No such file or directory
       find: /var/log/letsencrypt: No such file or directory
       /etc/s6-overlay/s6-rc.d/init-certbot-config/run: line 372: 364 Illegal instruction  certbot certonly --non-interactive --renew-by-default
       using keys found in /config/keys
       **** Permissions could not be set. This is probably because your volume mounts are remote or read-only. ****
       **** The app may not work properly and we will not provide support for it. ****
       Variables set:
       PUID=99
       PGID=100
       TZ=America/Los_Angeles
       URL=<redacted>
       SUBDOMAINS=
       EXTRA_DOMAINS=
       ONLY_SUBDOMAINS=true
       VALIDATION=http
       CERTPROVIDER=
       DNSPLUGIN=duckdns
       [email protected]
       STAGING=false
       **** Permissions could not be set. This is probably because your volume mounts are remote or read-only. ****
       **** The app may not work properly and we will not provide support for it. ****
       Using Let's Encrypt as the cert provider
       SUBDOMAINS entered, processing
       Sub-domains processed are: <redacted>
       E-mail address entered: [email protected]
       http validation is selected
       Generating new certificate
       ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container
       [migrations] started
       [migrations] 01-nginx-site-confs-default: skipped
       [migrations] done
       usermod: no changes
       [linuxserver.io SWAG ASCII banner]
       Brought to you by linuxserver.io
       To support the app dev(s) visit: Certbot: https://supporters.eff.org/donate/support-work-on-certbot
       To support LSIO projects visit: https://www.linuxserver.io/donate/
       User UID: 99
       User GID: 100
       [the "using keys found in /config/keys" / "Variables set" block then repeats verbatim and ends with the same "ERROR: Cert does not exist! ... Please fix your settings and recreate the container" message]

     Docker config:

       docker run -d --name='swag' --net='proxynet' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Tower" -e HOST_CONTAINERNAME="swag" -e 'URL'='<redacted>' -e 'VALIDATION'='http' -e 'SUBDOMAINS'='<redacted>' -e 'CERTPROVIDER'='' -e 'DNSPLUGIN'='duckdns' -e 'PROPAGATION'='' -e 'EMAIL'='[email protected]' -e 'ONLY_SUBDOMAINS'='true' -e 'EXTRA_DOMAINS'='' -e 'STAGING'='false' -e 'PUID'='99' -e 'PGID'='100' -e 'UMASK'='022' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver-ls-logo.png' -p '1443:443/tcp' -p '180:80/tcp' -v '/mnt/user/appdata/swag':'/config':'rw' --cap-add=NET_ADMIN 'lscr.io/linuxserver/swag'

     My router also forwards external port 443 to 1443 and port 80 to 180. Any ideas where to look for the problem?
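     Some quick checks I plan to try (a sketch; I'm assuming certbot is on the container's PATH and that the container stays up long enough to exec into, and the domain below is a placeholder):

       # The "Illegal instruction" lines make me suspect certbot itself is crashing,
       # not just failing validation. Does it even start?
       docker exec swag certbot --version

       # Is nginx answering locally on the mapped HTTP port?
       curl -I http://localhost:180

       # From outside my LAN: does port 80 actually reach the container through
       # the router's 80 -> 180 forward? (domain is a placeholder)
       curl -I http://mydomain.example/.well-known/acme-challenge/test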
  2. I just went through the process (replacing a smaller parity drive with a larger parity drive, then moving the old parity drive to data), and it was confusing and didn't go the way I wanted. I wanted the safest and easiest option. What I have right now is a full/clean parity sync while the old parity drive sits unassigned. However, my array has been online, so the old parity is already obsolete. This doesn't sound like the safest or the easiest option. I thought I could do a parity copy; I thought Unraid would have something built in to copy the parity bits from the smaller drive to the larger drive. If a parity copy is not possible, then Unraid could run both Parity 1 and Parity 2: once Parity 2 is complete, remove Parity 1 and move that drive to data, and the Parity 2 drive becomes the Parity 1 drive. In the future I hope Unraid adds scenario buttons that walk the user through the process for common scenarios like this.
  3. Should the FloodUI docker use PUID 99 and PGID 100 as a safer security practice?
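     For reference, this is how I'd pass them, assuming the image honors the linuxserver.io-style PUID/PGID convention (the image name, port, and paths below are placeholders):

       # Placeholder run command; only the PUID/PGID handling is the point here.
       docker run -d --name='flood' \
         -e PUID=99 \
         -e PGID=100 \
         -p 3000:3000 \
         -v /mnt/user/appdata/flood:/config \
         <flood-image>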
  4. Finally got the mover to work. I'm using the Mover Tuning plugin. To get this to work, I configured the "Mover Tuning - Share Settings".

     Mover Tuning config (global):

       moverDisabled="no"
       moverNice="0"
       moverIO="-c 2 -n 0"
       threshold="0"
       age="no"
       sizef="no"
       sparsnessf="no"
       filelistf="yes"
       filetypesf="yes"
       filetypesv="*.duplicate,*parts,*.bak,*.qb!,*.tmp, .Recycle.Bin, /mnt/cache/media/.Recycle.Bin"
       parity="no"
       enableTurbo="no"
       logging="yes"
       force="no"
       ignoreHidden="no"
       beforeScript=""
       afterScript=""
       omovercfg="no"
       movenow="yes"
       testmode="no"
       filelistv=""

     Mover Tuning - Share Settings:

       moverOverride="yes"
       age="no"
       sizef="no"
       sparsnessf="no"
       filelistf="no"
       filetypesf="no"
       ignoreHidden="no"
       omovercfg="no"
  5. I'm only getting the error message when I manually run the mover:

       # mover
       Log Level: 1
       mover: started
       Usage: grep [OPTION]... PATTERNS [FILE]...
       Try 'grep --help' for more information.
       find: ‘standard output’: Broken pipe
       find: write error
       mover: finished

     I've been writing a lot of data to the "media" share, which goes to the cache first and then moves to the array. I'm trying to get the data moved off the cache and onto the array, but the mover won't move it because of the error. I suppose it's possible I'm trying to move something off the array onto the cache and that is what's failing the mover. Or something else is trying to move off the cache onto the array but hitting a minimum-free-space floor (I haven't found it yet). I'm in a bind right now: I need to move data off the cache, the cache is full, but the mover errors out.
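     A note on the grep error: that usage message is what grep prints when it is invoked with no pattern at all, for example from an empty, unquoted shell variable, and the process feeding it then hits a broken pipe. That's only my guess at the mechanism, not a confirmed reading of the mover script; the path below is just illustrative:

       # Illustration only: with PATTERN empty and unquoted, grep receives no
       # arguments, prints its usage text, and exits; find's output then has
       # nowhere to go.
       PATTERN=""
       find /mnt/cache/media | grep $PATTERN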
  6. The pool/array's free space is 8.61 TB, and the share floor is 25 GB. What is causing the error message?
  7. When running "Mover" I receive this error message: Tower shfs: share cache full Not sure if the above message is telling me the "share" is full or the "cache" drive is full. I assume the share. Logs do not tell me which share; I assume "media". Log Info: mover: started Usage: grep [OPTION]... PATTERNS [FILE]... find: ‘standard output’: Broken pipe find: write error mover: finished Share Settings (Allocation Method = High-water): Is the mover trying to move only to Disk 1 (because of the "high water" allocation method)? But fails because Disk 1 is below the 25 GB minimum free space? Will mover try to move to Disk 2 or Disk 3, when high water allocation is no longer viable? Should I choose a different allocation method? The reason why the disks are not balanced, despite "high water" allocation, is because I have add disks at different points of time. tower-diagnostics-20240325-0932.zip
  8. Great points... If I were in charge and had the budget, I would have enhanced the Unraid product to include the important features that ZFS offers (integrity, scrub, repair, snapshot), but done in keeping with the Unraid brand/mission. And if I didn't have the budget to do it the Unraid way, the next best thing is SnapRAID.

     I just read Unraid's "mission" on their About page. Besides wanting to centrally store my data, I also want to keep my data for many decades. Hence the need for data integrity verification and some way of restoring data after a hardware failure or bit-rot event, whatever the cause (I have experienced bit loss, can't explain how or why, and don't really care; I just want mitigation), done efficiently in both money and time. It becomes hard to justify the cost when it exceeds the purchase cost of my media collection, apart from the time spent accumulating the media, which is why data integrity/recovery has become more important to me. Backups mean doubling my storage costs. Parity lets me buy just one drive, so the storage ratio is excellent, and it gives me the risk-vs-cost ratio that works for me at the moment.

     But something else was mentioned... gamers. I'm not a gamer, but I have heard their performance requirements are very demanding. So maybe that is the real reason ZFS is coming: it's for gamers with demanding performance requirements and the high-performance hardware investment to match, so ZFS's hardware demands and costs aren't a big deal to people who already have expensive rigs.
  9. I don't think ZFS is aligned with the Unraid product mission. I want to see an Unraid scrub, repair, snapshot, and parity feature set.
  10. My proposal would be:
       • Optional Unraid parity array.
       • Use SnapRAID on any pool (this is inherent to SnapRAID, so nothing needs to be developed).

     Future Unraid:
       • Full independent multi-pool support.
       • SnapRAID-like functionality, or better; or at least provide a SnapRAID plugin. Effort is minimal.
       • Union filesystems (pooling), not just btrfs/zfs/array. There are already several open source projects that do this. Effort is minimal.

     I would use a mirrored Unraid cache for write operations in between periodic snapshot/parity runs. That keeps my data safe until the parity operation. I would experiment with more frequent optional snapshots on the cache, since I expect the write operations to be small. With SnapRAID this can be limited to folders, file extensions, etc., and with snapshots I think it would give me versioning too; a rough sketch of the SnapRAID side is at the end of this post.

     Limetech / Unraid mission creep? I do not think btrfs/zfs is in line with Unraid's mission. Too much effort that isn't aligned with the mission; I just don't get it. Unraid is incorporating RAID (ZFS) in their next major release!? I just don't get it at all. But Limetech is spending money on doing it, and I've read Limetech was even reluctant to do it, so I'm not sure even Limetech thinks they're doing the right thing. I do think something similar to SnapRAID is in line with Unraid's mission, and I just don't get why Limetech doesn't go full throttle in that direction instead of integrating ZFS. Unraid would still be UNraid. BTRFS/ZFS multi-disk pools just aren't un-raid; SnapRAID is. But we aren't getting it. I believe the only reason Unraid users are going ZFS is that Unraid won't give them what they really want (scrubs, repair, snapshots). So I'm curious: if Unraid provided SnapRAID/unionfs (or similar), would ZFS demand drop?
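     Roughly what I have in mind for the SnapRAID side, as a minimal snapraid.conf sketch (all paths and the exclusions are placeholders):

       # One parity file, content files on two disks, two protected data disks,
       # and a couple of exclusions. Paths are placeholders.
       parity /mnt/parity1/snapraid.parity

       content /mnt/disk1/snapraid.content
       content /mnt/disk2/snapraid.content

       data d1 /mnt/disk1/
       data d2 /mnt/disk2/

       exclude *.tmp
       exclude /lost+found/

     Then "snapraid sync" after writes and "snapraid scrub" on a schedule.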
  11. The problem with BTRFS is the RAID 5/6 bug where you can lose the entire pool. That isn't something I'm willing to risk. I am curious why SnapRAID isn't added to NerdTools and plugged into Unraid; it seems like it could be very simple to do. But I could also be completely wrong and would like to know why not. There might be scenarios I haven't considered, and it might be a lot more difficult to manage than I think. I'll be experimenting with ZFS to find out how resource-hungry it is, so now I am very curious. It's not like I'm running Unraid on a Raspberry Pi or a mini PC, and I have 3 spare 1 TB drives to learn with. I assume my home network will be the speed bottleneck.
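     Roughly the experiment I have in mind (a sketch; the pool name and device paths are placeholders for my three spare drives):

       # Create a 3-drive raidz1 test pool, scrub it, and check its status.
       zpool create testpool raidz1 /dev/sdx /dev/sdy /dev/sdz
       zpool scrub testpool
       zpool status testpool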
  12. I think these are currently accomplished by extending Unraid with the File Integrity plugin and backup software; that basically becomes scrubs, snapshots, and backups. I think what is really being requested is an integrated, seamless, easy solution, and one that doesn't require doubling the drives with backups/mirroring/replication, i.e. accomplished via a parity algorithm. I prefer the storage/redundancy/restoration/protection efficiency of parity versus a 1:1 backup/mirror (50% efficiency). What I don't understand is why Unraid doesn't think this concept would be an evolutionary step. The solution doesn't seem that far away or costly, so I don't understand why it's not being developed.

     I am looking for a seamless suite of tools to EASILY manage scrubs and restoration. If SnapRAID+Unraid were offered, I would probably go that route (all the benefits of Unraid plus what I want from ZFS/BTRFS, without the disadvantages). Since SnapRAID is not offered, I am preparing for ZFS. My hope is that once I get ZFS configured, scrubs and repairs are just a matter of clicking a button (or running a couple of commands, as sketched below). The price I pay for ZFS is worth avoiding the headache of data integrity problems, panics, lost time, and restoring from a backup. I don't think ZFS is ideal, but it offers the next step toward what I want until something better comes along. Maybe someone will create a SnapRAID plugin. Maybe Unraid will develop the next generation of Unraid parity. In the meantime, I'm going to spend more money buying more drives to get the solution I want.
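     The routine I'm hoping it reduces to (a sketch; the pool/dataset names and the snapshot label are placeholders):

       # Periodic integrity check ("scrub") of the whole pool.
       zpool scrub tank
       zpool status tank

       # Point-in-time snapshot of a dataset, and a rollback if something goes wrong.
       zfs snapshot tank/media@2024-04-01
       zfs rollback tank/media@2024-04-01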
  13. Scrolling down to the bottom of the screen, I see the option to select "Parity is already valid." That bypasses the action described above, so it won't overwrite/erase the parity drive. And just to make sure, after bringing the main array online, I manually forced a parity check. All is good. Thank you. A special note for a future enhancement: I wish a bad drive outside of the main array (in a separate pool) didn't bring everything down and affect the other pools. I wish the pools were isolated.
  14. I had a hard drive failure on one of the non-array pools (a separate pool for backup drives). I checked power and SATA cables, and used different SATA controllers. Because of the drive failure, I couldn't mount the array, nor could I mount any pools. So I reset the hardware configuration, preserving the drive assignments, and now I can mount the array/pools. But when I go to start the array (and all the pools), I get the message: "All existing data on this device [parity drive] will be OVERWRITTEN when array is Started". Worse, the drive that failed was my backup drive! So now I'm left without both my parity drive and my backup. Why? What did I do wrong? What should I have done? tower-diagnostics-20240213-1957.zip
  15. How do I remount a non-array pool? I have a non-array pool (btrfs) on external hard drives, and I accidentally turned off the power to one of the drives in the pool. I turned the power back on, but the pool doesn't appear to be mounted. Notice the drive is "MISSING":

       # btrfs filesystem show
       Label: none  uuid: bcaa0df3-5b21-44f1-9293-f5f0ecee8ef6
               Total devices 1 FS bytes used 5.28TiB
               devid 1 size 5.46TiB used 5.40TiB path /dev/md2p1

       Label: none  uuid: 03ba8866-a578-4d09-bddf-aa64ed48afe6
               Total devices 2 FS bytes used 1.63TiB
               devid 1 size 931.51GiB used 919.51GiB path /dev/sdj1
               devid 3 size 931.51GiB used 904.51GiB path /dev/sdk1

       Label: none  uuid: b687a212-2cbf-48f8-9b97-ccc49e39d661
               Total devices 2 FS bytes used 347.89GiB
               devid 1 size 238.47GiB used 238.47GiB path /dev/sdi1
               devid 2 size 0 used 0 path /dev/sdb1 MISSING
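     What I'm considering trying from the command line (just a sketch; the mount point is a placeholder, and I'm assuming a degraded mount is acceptable until the missing device is detected again):

       # Re-scan for btrfs member devices after powering the drive back on.
       btrfs device scan

       # If the pool still won't mount normally, try a degraded mount
       # (mount point is a placeholder).
       mount -o degraded /dev/sdi1 /mnt/external-pool

       # Confirm both members show up again.
       btrfs filesystem show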