
Enessi (Members · 11 posts)


  1. @MAM59 You're a saint lol. Heaven forbid I use the thing I purchased for its intended purpose!! I believe I'll be going back to XFS then. Sorry, ZFS-bois. This has been quite the adventure, but at least I'll be back in familiar territory again. I wish I could afford a 3-2-1 backup setup, but I suppose I can be happy with 2-1. Regardless, I hope you have a great holiday, wherever you are!
  2. Yeah, the original source disk all the data came from is a 16TB as well, so moving things back to it wouldn't be an issue at all, just a time sink. So the recommendation would be: copy all the data from the current zpool back to the 16TB in my PC, format the ZFS disks, add them to the array (all but one, reserved for parity), and copy everything back to the new array. Once finished, add the parity drive and wait for the sync? And this would be an XFS array, not the cache pool, correct?
  3. I went with ZFS partly due to the hype about the filesystem's benefits. I originally had this whole thing set up as XFS, but my friend recommended ZFS instead, so I made the switch. There seems to be some controversy about ZFS vs XFS, and I decided to go with my friend's recommendation. I had no issues with XFS and everything functioned as intended without much work. ZFS, on the other hand, has been.. temperamental. Certainly due in part to my ignorance, but equally to its obtuse implementation in Unraid. Thus far, anyway. I'll consider things and make my final decision as to whether I keep things as-is, move to a cache pool, or go back to XFS. Regardless, thanks for your help in answering my questions on this. It's been a journey!
  4. I may have gone down the wrong rabbit hole to begin with lol. I have 4x16TB disks with ~8TB of data already copied to them in total. I could copy it all back to the source drive, but I don't want to deal with another 20hr copy if I can avoid it. Hence this original post, as it does work until reboot, anyway. And to verify: Unraid's "cache pool" is a ZFS pool operating as a traditional cache, unless I set the "use cache" flag to "only", in which case it will retain all data written to the pool until it fills up? If that's the case, I may suck it up and move everything around again.
  5. I've set the array to autostart, and I removed the disk share as well. Seems you are correct: Unraid doesn't use the Shares tab for ZFS. Interestingly, when I share the tank partition, the disk itself updates to shared as well. I'll keep looking through things and report back if I find something else or other odd behavior. Thanks for the help.
  6. One more thing! When I run ls /boot/config/shares/ I get the attached pic. Shouldn't unRAID load those configs on boot and make the shares accessible over the network?
  7. I also read through the Unassigned Devices topic about the plugin itself, and I THINK that should have done it, but, well, nah.
  8. On the face of it, yes! This is what I was looking for. However, I enabled sharing on both the disk itself and the tank partition, and applied the same settings for each disk, but they still do not show up as network shares. I also restarted the array and rebooted, and it's the same thing.
  9. As a follow-up, I also see that if I select the gear toggle for the tank partition on the device, I get a "share" toggle for that as well.
  10. Thanks for the reply, MAM59! To clarify, are you talking about going to Unassigned Devices in the Main tab, selecting the gear icon next to the ZFS device, and enabling the SMB toggle? If so, will this apply to the whole zpool or just the one disk, or will I have to go into each device and enable that toggle? Also, will that let my Windows machines access the data already stored on them over SMB?
  11. Heya folks, completely new to Unraid, and I have some questions about /etc/samba/smb-shares.conf persistence. I understand that it isn't a persistent folder/file zone. I have a ZFS filesystem that doesn't seem to want to remain available via SMB after a reboot. Each reboot I have to go into the SMB configuration files and re-add my share info. Once I do that, I can access, move, delete, etc. everything. My Docker containers can access everything with or without the edited SMB file. I have tried editing my go file, but I'm not really confident in there, much less having any success. What I'd like to know is whether there is a more graceful way of going about this, or if my approach has been fundamentally flawed in some way. My online searches for others having this issue haven't turned up much of anything. plz halp!
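The persistence issue described above comes down to Unraid regenerating /etc/samba/smb.conf in RAM on every boot, so direct edits to it (and to smb-shares.conf) are lost. The generated config does, however, include /boot/config/smb-extra.conf from the flash drive, which persists across reboots (it's also editable from the GUI under Settings > SMB > SMB Extras). A minimal sketch of a share definition that would survive a reboot, assuming the pool mounts at /mnt/tank (the share name and options here are illustrative, not from the thread):

```ini
; /boot/config/smb-extra.conf -- lives on the flash drive, so it persists;
; it is pulled into the generated /etc/samba/smb.conf at boot.
[tank]
    path = /mnt/tank
    browseable = yes
    writeable = yes
```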