OneMeanRabbit

Members
  • Posts: 39
  • Joined
  • Last visited


OneMeanRabbit's Achievements

Rookie (2/14) · 3 Reputation

  1. Jorge, where are the identified "bugs" of 6.12 documented, and/or the proposed changelog for 6.12.1? I'd love to see whether others are having issues that match my setup - if so, I may want to stay on 6.11.5. Thank you!!
  2. Same here - really kicking myself for moving my perfectly working server to RC5 because I was so stoked about ZFS. Ha, it was running perfectly until then... Rolled it back, and it's been nothing but highly unstable. Tried restoring the backup I made before the upgrade and reboot to a new flash drive, and now I'm stuck with a GUID error. Good luck - I'll be watching this thread to see if anyone finds a solution. Since my dockers and VMs are completely broken, I'm even down to doing a clean install and just importing my ZFS pools, if that works. I have backups of my docker templates, so that will probably be my next step if nothing comes of this thread.
  3. Followed this Changing the Flash Device post; it works perfectly until the "replace key" step. The popup comes up, I click Acknowledge and then Confirm, but the key replacement doesn't go through. That doesn't make sense, since my old SanDisk drive is on my desk in front of me and only the new one is plugged in. Is it too late to restore the backup to my older SanDisk drive, since this change obviously isn't working? I just used my original flash drive and restored the same backup to it - no problem. I'll wait for Customer Support to reply to my email - thanks.
  4. Same issue, and it still exists even after restoring my 6.11 backup. I can SEE the shares from Windows, but nothing will let me access them. NFS is also down. Syslog entries (newest first):

     2023-05-12 12:36:48 Error tower daemon winbindd initialize_winbindd_cache: clearing cache and re-creating with version number 2
     2023-05-12 12:36:48 Error tower daemon winbindd [2023/05/12 12:36:48.977247, 0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
     2023-05-12 12:36:48 Error tower daemon winbindd Copyright Andrew Tridgell and the Samba Team 1992-2022
     2023-05-12 12:36:48 Error tower daemon winbindd winbindd version 4.17.3 started.
     2023-05-12 12:36:48 Error tower daemon winbindd [2023/05/12 12:36:48.973483, 0] ../../source3/winbindd/winbindd.c:1440(main)
     2023-05-12 12:36:48 Notice tower user root /usr/sbin/winbindd -D
     2023-05-12 12:36:48 Error tower daemon smbd Copyright Andrew Tridgell and the Samba Team 1992-2022
     2023-05-12 12:36:48 Error tower daemon smbd smbd version 4.17.3 started.
     2023-05-12 12:36:48 Error tower daemon smbd [2023/05/12 12:36:48.956342, 0] ../../source3/smbd/server.c:1741(main)
     2023-05-12 12:36:48 Notice tower user root Starting Samba: /usr/sbin/smbd -D
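     For what it's worth, the checks I've been running from the console are roughly these (standard Samba tools plus the Slackware-style rc script Unraid uses; paths and script names may differ on your box):

     ```
     # sanity-check the generated Samba config
     testparm -s

     # confirm smbd/winbindd are actually running
     ps aux | grep -E 'smbd|winbindd' | grep -v grep

     # restart Samba without a full reboot
     /etc/rc.d/rc.samba restart

     # watch the log while reconnecting from Windows
     tail -f /var/log/syslog | grep -iE 'smbd|winbindd'
     ```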
  5. Having this issue (SMB all of a sudden not working) as well as constant hard locks since upgrading to 6.12-rc5 and rolling back - even after restoring from the backup I made before all of this... I'll post my own thread once I power cycle and turn on a syslog server.
  6. Prior to the upgrade and reboot, I did export my pools. After the upgrade and reboot, and before array start, zpool status says no pools exist (roughly what I ran is sketched below). Given where I'm at now, what should I attempt if I want to include them in my unRAID pools? Would love your thoughts on my questions, or anyone else's. Thank you kindly for taking your time to help!
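     The sequence, roughly (pool names are from my setup - adjust for yours):

     ```
     # before the upgrade: cleanly export the pools
     zpool export hdd
     zpool export nvme

     # after the upgrade, before array start: exported pools won't appear here
     zpool status

     # list pools available for import without actually importing anything
     zpool import
     ```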
  7. To clarify, I'm on RC5 now - but before starting the array I ran zpool import -a, which brought in my hdd and nvme pools. Their status is good and NFS/SMB shares work great. Then I created a pool in the UI called "hdd", which it let me do, and I added the 8 drives as raidz with 2 groups of 4 devices. The pool shows up but still has the "Unmountable:" error. However, everything seems to work fine, except that for some reason I had to loosen permissions on my appdata/docker folder for a few containers - adminer and nextcloud (rough commands below).
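     The permissions bump looked roughly like this (the appdata path and the nobody:users ownership are just how my system is laid out - adjust to yours):

     ```
     # give the containers back read/write access to their appdata folders
     chown -R nobody:users /zfs/nvme/appdata/adminer /zfs/nvme/appdata/nextcloud
     chmod -R u+rwX,g+rwX /zfs/nvme/appdata/adminer /zfs/nvme/appdata/nextcloud
     ```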
  8. tower-diagnostics-20230510-1309.zip - forgot to attach this to the original post. Thank you, and sorry for making you waste time requesting it! I started looking through it and can see a mention of my nvme.cfg under shares, but I want to remove it completely so I can eventually reuse that name once I figure this out (see the sketch below). That would be a quick win for me, thank you!
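     This is the leftover file I mean, assuming the usual Unraid flash layout - I'd stop the array and keep a copy before deleting anything:

     ```
     # user-share definitions live on the flash drive
     ls /boot/config/shares/

     # back up, then remove the ghost share config
     cp /boot/config/shares/nvme.cfg /boot/config/nvme.cfg.bak
     rm /boot/config/shares/nvme.cfg
     ```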
  9. I tried it initially by following the paragraph from the RC1 & RC3 blog posts. That didn't work at first, because I seem to have some ghost user shares in a config file (not showing in the UI Shares tab) with the same names as my ZFS pools - nvme & hdd. So I created new pools as above and had to choose new names. After adding the new pools of 8 drives (2 raidz1 vdevs) and 1 nvme (raidz0), it changed my pool names, couldn't find any data, and showed the "Unmountable: Unsupported or no file system" error. I kind of freaked out and rolled back to 6.11. Still couldn't find my data, then tried zpool import -a and it worked.

     To fix my existing shares/docker mappings, I renamed the pools back (sketched below):
     zpool export <new pool names>
     zpool import <new pool names> <original pool names>

     Fine, back to "normal", except the system was acting funny... the ZFS pools wouldn't auto-import at boot, and there was weird docker behavior. Today I tried again and simply ran zpool import <original pool names>; while it worked initially, I can't get them properly into a new pool - it still shows "Unmountable: Unsupported or no file system" even though everything else works fine.

     Questions:
     • Is there any value in using my ZFS pools as unRAID pools vs. just keeping them "segregated"?
     • If there is value, is there anything in my configuration that precludes importing them as-is in the current release?
     • How should I set them up with "cache" settings when I don't really require that type of backend management?

     The system runs fabulously as-is, with 2 flash drives as my "fake array" and 2 SSDs as my only pool hosting my appdata - mainly docker folders, since I was having some crazy pain with ZFS creating endless snapshots of their images.
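     Concretely, the rename-back dance was the following, then a verify (the new pool names were whatever the UI assigned, so placeholders here):

     ```
     # bring everything back in after the rollback
     zpool import -a

     # rename each pool the UI had renamed: export it under the new name,
     # then import it again under its original name
     zpool export <new_pool_name>
     zpool import <new_pool_name> <original_pool_name>

     # confirm both pools are healthy under their original names
     zpool status hdd nvme
     ```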
  10. I used the GUI and it wouldn't let me use the existing pool names, so I used new ones - but then it changed the pools' names. So I just imported them after downgrading, then exported <old name> and imported <new name>. Do we have clear instructions on migrating? It seems like it would have been fine to stay on 6.12 and just do the same thing... it just wasn't what was documented.
  11. I'm gonna need a proper writeup with pictures before I try that again, haha! It wouldn't let me reuse my pool names because it thinks I still have user shares with those names. I deleted those shares forever ago and just use sharenfs locally - does anyone have any idea where I can completely remove all mention of the old user shares before the next time I try this upgrade? A thought on why it may have caused issues: I still had my drives "passed through" so I wouldn't accidentally mount them via Unassigned Devices.

     ****ZFS update scare - rolled back & zpool import -a, plus export/import to revert the pool names and set the mountpoints back to /zfs/<pool> (sketched below)****

     I really hope I didn't bork my perfectly running 6.11.5 system... I followed these exact steps: "Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report)."

     The drives are showing empty, and neither original pool can be imported. It even renamed my first partition to the new pool names (since I couldn't reuse my existing names - nvme & hdd - because unRAID thinks I still have user shares with those names). I'm trying to roll back now, but OMG... haha! I didn't see any pop-up like the one above warning about formatting, so I'm hoping it's just a "fun" scare. Should I have tried importing them via zpool import???
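     For my own notes, this is where I've been hunting for the leftover share definitions and how I put the mountpoints back afterwards (/boot/config/shares is the usual spot for Unraid's share .cfg files - double-check before deleting anything):

     ```
     # look for stale user-share definitions matching the pool names
     ls /boot/config/shares/
     grep -ril 'nvme\|hdd' /boot/config/shares/

     # after re-importing, put the mountpoints back where my services expect them
     zfs set mountpoint=/zfs/nvme nvme
     zfs set mountpoint=/zfs/hdd hdd
     zfs mount -a
     ```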
  12. Hello, I'm not sure what happened on my end, but I haven't been able to successfully download the updated driver. I've been stuck on 520.56.06 for a while. I keep the window open so it keeps "Trying to redownload the Nvidia Driver v530.41.03", and I can leave it open for days - no dice. Is there any way to manually download that driver and update it via the CLI instead of this plugin? To be clear, all of my Nvidia dockers & VMs work great - love that I can do several streams and concurrently use the card across multiple containers/VMs! Thank you. **Edit** I tried just deleting it from /config/plugins/nvidia-driver/packages/5.19.17 and having it re-download. It quickly downloaded the same file, but it's still stuck on the same "Downloading... This could take some time..." window (a couple of checks I ran are below).
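     A couple of things I checked from the console, in case it helps (on my box the flash config lives under /boot/config, and I'm assuming the plugin keeps a checksum file next to the package - adjust if yours differs):

     ```
     # see what actually landed in the plugin's package cache
     ls -lh /boot/config/plugins/nvidia-driver/packages/5.19.17/

     # if there is an .md5 alongside the package, compare it to the real checksum
     md5sum /boot/config/plugins/nvidia-driver/packages/5.19.17/*
     cat /boot/config/plugins/nvidia-driver/packages/5.19.17/*.md5
     ```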
  13. I found the options and syntax needed to make sharenfs work great (example below)! I tried replicating it via the GUI and/or the go file, but both were frustrating. I also added NFS to Windows 11 Pro - couldn't be happier.
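     Roughly what I mean, as an example - the dataset name, subnet, and Windows drive letter here are just placeholders for my layout (on Linux the sharenfs value is passed through as NFS export options):

     ```
     # share a dataset over NFS, restricted to the local subnet
     zfs set sharenfs='rw=@192.168.1.0/24,no_root_squash,async' nvme/appdata

     # confirm what is being exported
     zfs get sharenfs nvme/appdata
     exportfs -v

     # on Windows 11 Pro, after enabling "Client for NFS", from a command prompt:
     #   mount -o anon \\tower\zfs\nvme\appdata Z:
     ```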
  14. Same issue... can't figure out what's broken.
  15. @Hoopster Thank you for the quick reply. I haven't made a backup in some time, but will try this this evening. Anyone else here with an AMD Ryzen 5 CPU - does your normal unRAID boot option show that option? Most of what I'm seeing online says it should say amd_iommu=on iommu=pt, or that it's unnecessary. Worst case, is it possible to download 6.10.3 and "start over"? I can save my docker templates, and my config files are on a separate ZFS nvme pool. I'd probably just need to redo my cache SSDs, which hold my docker subsystem (not using a .img, but folders on the SSDs). Would that work as well?

     Thanks again to @Hoopster for the advice! I pulled syslinux.cfg from an old backup and it didn't have the offending amd_iommu option. Deleted it from the current cfg and voila (roughly how, below)! Not sure how it got there, but it's a great lesson in using backups to validate changes - and in making them more often...
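     For anyone hitting the same thing, this is roughly how I compared and fixed it (on Unraid, syslinux.cfg lives on the flash at /boot/syslinux/; the backup path here is just a placeholder, and I'd keep a copy before editing):

     ```
     # compare the current boot config against the one from the old backup
     diff /boot/syslinux/syslinux.cfg /path/to/old-backup/syslinux/syslinux.cfg

     # the stray parameters showed up on the default boot entry's "append" line,
     # something like:  append amd_iommu=on iommu=pt initrd=/bzroot
     # remove just those parameters, leaving the rest of the line intact
     nano /boot/syslinux/syslinux.cfg
     ```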