OneMeanRabbit

Everything posted by OneMeanRabbit

  1. Jorge, where are the identified "bugs" of 6.12 documented, and/or the proposed changelog for 6.12.1? I'd love to see whether others are having issues that match my setup, in which case I may want to stay on 6.11.5. Thank you!!
  2. Same, really kicking myself in the ass for trying to move my perfectly working server to RC5 because I was so stoked about ZFS! haha, it was running perfectly until then... Rolled it back, and it's been nothing but highly unstable. Tried restoring the backup I made before my upgrade & reboot onto a new flash drive, and now I'm stuck with a GUID error. Good luck, will be watching this to see if anyone has any solutions. Since my dockers and VMs are completely broken, I'm even down to doing a clean install and just importing my ZFS pools, if that works. I have backups of my docker templates, so that will probably be my next step if nothing comes of this thread.
  3. Followed this Changing the Flash Device post; it works perfectly until "replace key". The popup comes up, and I clicked Acknowledge and then Confirm. It doesn't make sense, since my old SanDisk drive is on my desk in front of me and only my new one is plugged in. Is it too late to restore the backup to my older SanDisk drive, since this change is obviously not working? I just used my original flash drive and restored the same backup to it - no problem. I'll wait for Customer Support to reply to my email - thanks.
  4. Same issue, and it still exists even after restoring my 6.11 backup. I SEE the shares from Windows, but nothing will let me access them. NFS is also down.
       2023-05-12 12:36:48 Error tower daemon winbindd initialize_winbindd_cache: clearing cache and re-creating with version number 2
       2023-05-12 12:36:48 Error tower daemon winbindd [2023/05/12 12:36:48.977247, 0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
       2023-05-12 12:36:48 Error tower daemon winbindd Copyright Andrew Tridgell and the Samba Team 1992-2022
       2023-05-12 12:36:48 Error tower daemon winbindd winbindd version 4.17.3 started.
       2023-05-12 12:36:48 Error tower daemon winbindd [2023/05/12 12:36:48.973483, 0] ../../source3/winbindd/winbindd.c:1440(main)
       2023-05-12 12:36:48 Notice tower user root /usr/sbin/winbindd -D
       2023-05-12 12:36:48 Error tower daemon smbd Copyright Andrew Tridgell and the Samba Team 1992-2022
       2023-05-12 12:36:48 Error tower daemon smbd smbd version 4.17.3 started.
       2023-05-12 12:36:48 Error tower daemon smbd [2023/05/12 12:36:48.956342, 0] ../../source3/smbd/server.c:1741(main)
       2023-05-12 12:36:48 Notice tower user root Starting Samba: /usr/sbin/smbd -D
  5. Having this issue (SMB all of a sudden not working) as well as constant hard locks since upgrading to 6.12-rc5 and rolling back, even after restoring from the backup I made before everything... Will post my own thread once I power cycle and turn on the syslog server.
  6. Prior to the upgrade and reboot, I did export my pools. After the upgrade and reboot, and before array start, zpool status showed no pools. Given where I'm at now, what should I attempt if I want to include them in my unRAID pools? Would love your thoughts on my questions, or anyone else's. Thank you kindly for taking the time to help!
  7. To clarify, I'm on RC5 now, but before starting the array I ran zpool import -a, which brought in my hdd & nvme pools (sketch below). Their status is good and NFS/SMB shares work great. Then I created a pool in the UI called "hdd", which it let me do, and I added the 8 drives as raidz with 2 groups of 4 devices. The pool shows up but still has the "Unmountable:" error. However, everything seems to work fine, except that for some reason I had to increase permissions on my appdata/docker folder for a few containers - adminer & nextcloud.
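     A minimal sketch of the manual import flow described above (the zpool commands are standard OpenZFS; the pool names hdd and nvme come from this post):
        # list pools available for import (read-only scan, nothing is changed)
        zpool import
        # import every pool found on the attached devices
        zpool import -a
        # verify the imported pools are healthy
        zpool status hdd
        zpool status nvme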
  8. tower-diagnostics-20230510-1309.zip Forgot to attach this to the original - thank you, and sorry for making you waste time requesting it! I started looking and can see a mention of my nvme.cfg under shares, but I want to remove it completely so I can eventually reuse the name once I figure this out (sketch below). That would be a quick win for me, thank you!
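     A minimal sketch of where that leftover share config normally lives and how it could be cleared (the /boot/config/shares location is standard unRAID; deleting the file is an assumption about the right fix, so a copy is stashed elsewhere first):
        # keep a copy of the stale share config outside the shares folder
        cp /boot/config/shares/nvme.cfg /boot/nvme.cfg.bak
        # remove it so the name can be reused for a pool
        rm /boot/config/shares/nvme.cfg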
  9. I tried it initially following the paragraph from the RC1 & RC3 blog posts. That didn't work at first, because I seem to have some ghost user shares in a config file (not showing in the UI Shares) with the same names as my ZFS pools - nvme & hdd. I created new pools as above and had to choose new names. After adding the new pools of 8 drives (2 raidz1 vdevs) & 1 nvme (raidz0), it changed my pool names. It couldn't find any data and showed the "Unmountable: Unsupported or no file system" error. I kinda freaked and rolled back to 6.11. It still couldn't find my data, so I tried zpool import -a and it worked. To fix my existing shares/docker mappings, I ran (sketch below):
       zpool export <new pool names>
       zpool import <new pool names> <original pool names>
     Fine, back to "normal", except the system was acting funny... ZFS pools wouldn't auto-import at boot, plus weird docker behavior. Today I tried again with just a simple zpool import <original pool names>, and while it worked initially, I can't get the pools properly into a new unRAID pool. It still shows "Unmountable: Unsupported or no file system" even though everything else works fine.
     Questions:
       • Any value to using my ZFS pools as unRAID pools vs just keeping them "segregated"?
       • If there is value, is there anything in my configuration that precludes importing them as-is in the current release?
       • How should I set them up with "cache" settings when I don't really require that type of backend mgmt?
     The system runs fabulously as is, with 2 flash drives as my "fake array" and 2 SSDs as my only pool hosting my "appdata" - mainly docker folders, as I was having some crazy pain around how ZFS was creating infinite snapshots of their images.
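     A minimal sketch of that rename-on-import workflow (the pool names are placeholders matching the post above; exporting and re-importing under a different name is standard OpenZFS behavior):
        # release the pool under the name the GUI assigned
        zpool export newname
        # re-import it under the original name
        zpool import newname originalname
        # confirm it came back healthy under the old name
        zpool status originalname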
  10. I used the GUI, and it wouldn't let me use the existing pool names. So I used new ones - but then it changed the pools' names. So I just imported them after downgrading, then exported <new name> and imported it back as <old name>. Do we have clear instructions on migrating? It seems like it would have been fine to stay on 6.12 and just do the same thing... it just wasn't what was documented.
  11. I'm gonna need a proper writeup with pictures before I try that again, haha! It wouldn't let me reuse my pool names because it thinks I still have them as user shares. I deleted those forever ago and just use sharenfs locally - anyone have any idea where I can completely remove mentions of old user shares for the next time I try this upgrade? Thoughts on why it may have caused issues... I still had my drives "passed through" so I didn't accidentally mount them via Unassigned Devices.
     ****ZFS update scare - rolled back & zpool import -a + export/import to revert the pool names and set the mountpoint back to /zfs/<pool> (sketch below)****
     I really hope I didn't bork up my perfectly running 6.11.5 system... I followed these exact steps: "Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report)." The drives are showing empty, and neither original pool can be imported. It even renamed my 1st partitions to the new pool names (since I couldn't reuse my existing names - nvme & hdd - because unRAID thinks I still have user shares named that). I'm trying to roll back now, but OMG... haha! I didn't see any pop-up like the one above warning about formatting, so I'm hoping it's just a "fun" scare. Should I have tried importing it via zpool import???
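     A minimal sketch of resetting the mountpoint after the rename, as described in the bold note above (the /zfs/<pool> layout is this post's own convention; zfs set mountpoint is standard OpenZFS):
        # point each pool's root dataset back at its old mount location
        zfs set mountpoint=/zfs/hdd hdd
        zfs set mountpoint=/zfs/nvme nvme
        # check the result
        zfs get mountpoint hdd nvme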
  12. Hello, not sure what happened on my end, but I haven't been able to successfully download the updated driver. I've been stuck on 520.56.06 for a bit. I keep the window open so it keeps "Trying to redownload the Nvidia Driver v530.41.03", and I can keep it open for days - no dice. Is there any way to manually download that driver and update it via CLI instead of through this plugin? To be clear, all of my Nvidia dockers & VMs work great - love that I can do several streams and use the card concurrently across multiple containers/VMs! Thank you. **Edit** I tried just deleting it from /config/plugins/nvidia-driver/packages/5.19.17 and letting it re-download. It quickly downloaded the same file, but it's still stuck on the same "Downloading... This could take some time..." window.
  13. I found the options and syntax needed to make sharenfs work great (sketch below)! Tried replicating it via the GUI and/or the go file, but they are frustrating. Also added NFS to Windows 11 Pro - couldn't be happier.
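     A minimal sketch of the kind of sharenfs setting being referred to (the dataset name and subnet are hypothetical; the property itself is standard OpenZFS on Linux, where the option string is handed to the kernel NFS exports):
        # export a dataset read/write to the local subnet
        zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" hdd/media
        # confirm the share is active
        zfs get sharenfs hdd/media
        showmount -e localhost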
  14. Same issue... can't figure out what's broken.
  15. @Hoopster Thank you for the quick reply. I haven't made a backup in some time, but will try this this evening. Anyone else here with an AMD Ryzen 5 CPU - does your normal unRAID boot option show that option? Most things I'm seeing online say it should be amd_iommu=on iommu=pt, or that it's unnecessary. Worst case, is it possible to download 6.10.3 and "start over"? I can save my docker templates, and my config files are on a separate ZFS nvme pool. I'd probably just need to redo my cache SSDs, which hold my docker subsystem (not using a .img, but folders on the SSDs). Would that work as well? Thanks again to @Hoopster for the advice! I used an old backup to check syslinux.cfg - it didn't have the offending amd_iommu option. Deleted it from the current cfg and voila (sketch below)! Not sure how it got there, but it was a great lesson in using backups to validate changes - and in making them more often...
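     For reference, a minimal sketch of what a cleaned-up default boot entry in /boot/syslinux/syslinux.cfg looks like once the stray amd_iommu token is gone (this matches the stock unRAID entry; the file's other labels are left out):
        label Unraid OS
          menu default
          kernel /bzimage
          append initrd=/bzroot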
  16. Captured the error which flashed very quickly:
       Loading /bzroot: ...ok
       Loading amd_iommu=pt...failed: No such file or directory
     I didn't change any other unRAID options or BIOS settings besides downloading the r4 update. Storms must have knocked out power long enough to cause the UPS to kick off a shutdown. My BIOS has SR-IOV enabled, and I have diagnostics files from 6 days ago (when the issue showed up) and from today.
     Things I tried:
       • Rolled back to 6.11.0.r3.
       • Ran Memtest - 8 runs, no errors.
       • syslog in safe mode keeps showing "Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes) in /usr/local/emhttp/plugins/dynamix/include/Syslog.php on line 18"
       • Renamed plugins & rebooted.
       • Removed all plugins & rebooted.
       • Manually edited the options out of the regular boot entry - PCIe passthrough & amd_iommu=pt - leaving only bzimage & bzroot.
       • Booted in safe mode and reinstalled the ZFS plugin & others; the ZFS pools (all data) are fine and accessible after installation.
     SMART errors on the 2 mirrored cache drives (reallocated sector count 24) and several on my spinning rust (mirrored vdevs - still showing healthy)... UDMA CRC error count 2, reallocated sector count 568, offline uncorrectable.
     Thinking maybe reset the CMOS or replace the CMOS battery? Happy to post the entire .zip or specific files if this helps.
  17. Thank you Marshalleq! I tried rolling back to 6.11.0.r3 and rebooting - no bueno. I'll retry this again just in case, and come back to see how to get the updated .plg installed. I did wget the link in the first post and it downloaded, but I'm not sure if that's the correct procedure (sketch below).
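     A minimal sketch of installing a downloaded .plg from the command line (the URL is a placeholder for the link in the first post; installplg is unRAID's plugin installer, though installing from the Plugins page is the usual route):
        # download the plugin file to the flash drive (placeholder URL)
        wget -O /boot/config/plugins/unRAID6-ZFS.plg https://example.com/unRAID6-ZFS.plg
        # install it with unRAID's plugin installer
        installplg /boot/config/plugins/unRAID6-ZFS.plg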
  18. Downloaded the 6.11.0.r3 update before my trip (from 6.11.0.r2), but didn't reboot... I'm in HI now, and a storm back home knocked out power long enough for the server to initiate a shutdown. It updated, but won't boot unless in safe mode. Things I've tried: renamed some/most plugins + rebooted normally; removed all settings from the regular boot option except the defaults (dropping the iommu and PCIe stuff) + rebooted. Seems it's a plugin, and the only one I haven't renamed is this one. Question: can I boot into safe mode and reinstall unRAID6-ZFS.plg to check my pools? I've done ample testing prior and feel confident the data is good - I'm just trying to troubleshoot while away to get at least the ZFS plugin up and running again. Good thing I set up WireGuard on the firewall and have access to IPMI... but it's a damn good lesson that I need some basic HA on critical apps like Bitwarden - my phone rebooted, and I can't access my phone's local DB. Luckily my wife's iPhone either handles it differently from Android or hasn't rebooted, but what a PITA, haha. I have diagnostics and am going through them, but they were run after I discovered the issue...
  19. This is great stuff, thank you! I have 128GB of RAM and monitor/control it pretty aggressively (with docker parameters) due to doing so much in RAM (not to mention ZFS... :D). Haven't crashed yet, but very good to know. That parameter --mount type=tmpfs,destination=/tmp,tmpfs-size=8589934592 is exactly what I am looking for (sketch below)! Very easy to calculate & limit exactly what you want!! I'd be happy to create an outline and propose best answers w/ links for some kind of pinned Best Practices or FAQ. For new users, that would help drastically vs. reading through 15 pages plus additional topics like the one you posted - not to mention it would lessen the number of repeated answers you give! Thank you for creating, updating, and SUPPORTING this (and many other) dockers for unRAID!
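     A minimal sketch of how that tmpfs parameter is used (the container/image names are illustrative; on unRAID it would normally go in the template's Extra Parameters field - 8589934592 bytes is 8 GiB):
        docker run -d --name jellyfin \
          --mount type=tmpfs,destination=/tmp,tmpfs-size=8589934592 \
          jellyfin/jellyfin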
  20. I didn't create a RAM disk, just /tmp. Makes sense re: /cache - I'll move that to my nvme where the docker config is. Re: /transcodes mapped to /tmp: while it worked great for a day, it started throwing odd errors related to my frigate transcodes also being mounted to /tmp - specifically /tmp/frigate.
       [2022-04-13 15:21:26.763 -05:00] [ERR] [179] Jellyfin.Server.Middleware.ExceptionMiddleware: Error processing request. URL "POST" "/Sessions/Playing". System.UnauthorizedAccessException: Access to the path '/transcodes/frigate/cam-livingroom-20220413152052.mp4' is denied. ---> System.IO.IOException: Permission denied
     The only way this Jellyfin docker doesn't look for other transcoding videos in /tmp is to put them in a separate directory, i.e. /tmp/transcodes (sketch below). Any ideas on how to keep it from looking in other directories under /tmp when it wants to transcode?
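     A minimal sketch of the separation being described - giving each container its own subdirectory of the host's /tmp instead of sharing /tmp itself (the host paths and mappings are illustrative template entries, not confirmed settings from either container's docs):
        # Jellyfin template: map its transcode path to a dedicated host directory
        -v /tmp/transcodes:/transcodes
        # Frigate template: keep its temp files in a different host directory
        -v /tmp/frigate:/tmp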
  21. @ich777 Thank you for the /tmp recommendation, I'm trying that now. I was trying to check whether /tmp/jellyfin & /tmp/jellyfin/transcode existed and, if not, create them with the proper permissions. But I see that everything in /tmp is supposed to be volatile, so the directory structure isn't needed. Hoping this works, I'll report back - thank you! EDIT - Works like a charm without the need to babysit directories... sigh. Thank you again. I don't really use the unRAID cache or array - is it "ok" to mount /cache to /tmp as well, or should it go to an SSD or nvme?
  22. Odd issue I'm trying to troubleshoot via rebooting and a mix of the go file / User Scripts plugin. I noticed streaming kept stopping "randomly" and confirmed that after a reboot, my docker volumes for cache & transcode - /tmp/jellyfin/ - would get set back to root:root ownership and/or disappear completely (/tmp/jellyfin/transcode). I've searched and no one else seems to have this issue, so it must be user error. What is best practice for leveraging RAM for this docker image (current attempt sketched below)? Owner - root:root vs nobody:users? Permissions - if root:root, 777? Finally, can someone explain the differences between editing go vs User Scripts? It seems like go executes prior to User Scripts, which helps with things like ZFS, etc.
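     A minimal sketch of the kind of go-file addition being asked about - recreating the volatile /tmp directories with unRAID's nobody:users ownership at boot (the paths come from this post; the exact ownership/mode the container needs is an assumption, not a recommendation):
        # lines added to /boot/config/go, before the line that starts emhttp
        mkdir -p /tmp/jellyfin/transcode
        chown -R nobody:users /tmp/jellyfin
        chmod -R 775 /tmp/jellyfin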
  23. I'm getting a weird situation with node-red specifically (tried multiple images - same issue). When it pulls the image and tries to copy /data into my persistent mapping, it fails. Specifically, it creates folders/files with sonny:ssh-allow ownership, and I cannot figure out how to solve it... This is my exact issue! And here - it forces UID 1000 or 1001, which makes sense because that matches the user in my /etc/passwd file (not sure where "ssh-access" comes from). Solution attempt #1 - kinda fixes it: tried adding "--user nobody" to Extra Parameters, and it at least installs and runs immediately... but with sonny:1000 vs. sonny:ssh-access... Every other docker I have doesn't do this, and most use nobody:users - especially when I set the PGID/PUID/UMASK variables. I tried manually adding those specific variables, but no dice. I've googled everywhere, and it *may be* a recent sshd_config change since 6.9? Is there any way I can force the docker to run with specific permissions, nobody:users (sketch below)?
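     A minimal sketch of forcing the container to run as unRAID's nobody:users by numeric IDs (99:100 are the standard nobody/users IDs on unRAID; pre-chowning the appdata path is an assumption about what this particular image tolerates, and the path is illustrative):
        # give the persistent mapping to nobody:users first
        chown -R 99:100 /mnt/user/appdata/node-red
        # then run the container as that UID:GID (Extra Parameters on unRAID)
        --user 99:100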
  24. @ich777 Thank you so much for this, and for the great support you give! I've searched Google and 20+ pages here, and can't find where I'm going wrong... The permissions change from the default of drwxrwxr-x to drwxrwx--- after I force update / the docker starts - not sure if that's normal. I've disabled Steam Guard completely to ensure that wasn't causing the issue. I've logged into the console, and I can type 'login' and it asks for a user & password, but no combo works. I'm not really understanding this portion of the Project Zomboid instructions. I must be missing a setup step, but I can't find it - I'll gladly accept an annoyed link if anyone can provide guidance (see also the sketch below)!
     ------------------------------
     This Docker will download and install SteamCMD. It will also install Project Zomboid and run it.
     Servername: 'Docker ProjectZomboid'
     Password: 'Docker'
     AdminPassword: 'adminDocker'
     ATTENTION: First Startup can take very long since it downloads the gameserver files!
     CONSOLE: To connect to the console open up a terminal and type in: 'docker exec -u steam -ti NAMEOFYOURCONTAINER screen -xS PZ' (without quotes), to disconnect from the console simply close the window.
     ---------------------------------
     Here are the errors the log is showing:
       Waiting for client config...OK
       Waiting for user info...OK
       Please use force_install_dir before logon!
       Update state (0x3) reconfiguring, progress: 0.00 (0 / 0)
       Update state (0x81) verifying update, progress: 64.79 (1868125046 / 2883304296)
       Update state (0x81) verifying update, progress: 90.86 (2619888380 / 2883304296)
       Error! App '380870' state is 0x602 after update job.
       ---Prepare Server---
       ---Setting up Environment---
       ---Looking for server configuration file---
       ---No server configruation found, downloading template---
       ---Sucessfully downloaded server configuration file---
       Archive: /serverdata/serverfiles/cfg.zip
         creating: Zomboid/
         creating: Zomboid/db/
        inflating: Zomboid/db/servertest.db
        inflating: Zomboid/options.ini
         creating: Zomboid/Server/
        inflating: Zomboid/Server/servertest.ini
        inflating: Zomboid/Server/servertest_SandboxVars.lua
       ---Checking for old logs---
       ---Server ready---
       ---Start Server---
       Cannot exec '/serverdata/serverfiles/ProjectZomboid64': Permission denied
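     A minimal sketch of the permission check/fix that the final "Permission denied" line usually points to (the /serverdata path and the container user 'steam' come from this post's console instructions; whether this is the actual cause here is an assumption):
        # from the unRAID shell, fix ownership and the execute bit inside the container
        docker exec -u root NAMEOFYOURCONTAINER chown -R steam /serverdata/serverfiles
        docker exec -u root NAMEOFYOURCONTAINER chmod +x /serverdata/serverfiles/ProjectZomboid64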
  25. I messed up my SSL and had to use a backup flash image.
     1. I got everything back, but my Main page doesn't show the FS as zfs anymore. Everything else works fine, but I'm not sure if I need to perform another step (import the zpool - sketch below) or what.
     2. I want to ensure I have my system set up right as far as mounting goes. When I set up ZFS originally, I mounted the pools at /mnt/nvme & /mnt/data. I also see them mounted at /mnt/disk1/zpool/data & /mnt/disk1/zpool/nvme, plus /mnt/user/zpool. Is this "ok" and/or "optimal"? I've seen a lot of new practices since setting up originally ~1 year ago.
     Thank you for any help!
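     A minimal sketch of the checks implied by question 1 - seeing whether the pools simply aren't imported and where they mount (the pool names nvme & data are inferred from this post's mount paths; the commands are standard OpenZFS):
        # pools the system already has imported
        zpool list
        # scan attached disks for importable pools (shows names without importing)
        zpool import
        # import one by name if it appears, then check where it mounts
        zpool import nvme
        zfs get mountpoint nvme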