
Persistent SMB files on boot


Enessi
Solved by MAM59


Heya folks, 
completely new to Unraid, and I have some questions about /etc/samba/smb-shares.conf persistence. I understand that it isn't a persistent location. I have a ZFS filesystem which doesn't seem to want to remain available via SMB after a reboot. After each reboot I have to go into the SMB configuration files and re-add my share info. Once I do that, I can access, move, delete, etc. everything. My Docker containers can access everything with or without the edited SMB file.

I have tried editing my go file, but I'm not really confident in there, much less having any success.
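For reference, this is roughly what I put in the go file; the source path under /boot/config is just a spot I made up to stash my edits, and I'm not even sure the reload step is the right one:

```
# /boot/config/go runs from the flash drive at boot, so it persists.
# Copy my saved share definitions (stashed at a made-up path) over the
# RAM-backed config, then ask Samba to re-read its configuration.
cp /boot/config/custom/smb-shares.conf /etc/samba/smb-shares.conf
smbcontrol all reload-config
```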


What I'd like to know is whether there is a more graceful way of going about this, or if my approach has been fundamentally flawed in some way. My searches online for others having this issue haven't turned up much of anything. plz halp!

Edited by Enessi
spelling :O

Unraid runs from a RAM disk, so everything you change manually is lost after a reboot.

 

Only the path /boot is on the flash drive and persistent.

What you are looking for is the folder /boot/config/shares.

 

But you should not modify it manually; use the web interface to create the share, and it will automatically create and save the needed files in that folder.

 

But this is only possible for folders within the array disks. If you want to share an outside folder you can use a plugin, but that's not recommended, because everything outside the array is not considered safe. Look for "Unassigned Devices" for this, if you dare.
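You can see both points for yourself from a terminal, for example:

```
df -h /                    # the root filesystem is RAM-backed
ls /boot/config/shares/    # one .cfg per share created in the web UI
```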

 


Thanks for the reply, MAM59!
To clarify, are you talking about going to Unassigned Devices in the Main tab, selecting the gear icon next to the ZFS device and enabling the SMB toggle?
If so, will this apply to the whole zpool or just the one disk, or will I have to go into each device and enable that toggle? Also, will that enable SMB so my Windows machines can access the data already stored on them?

[screenshot: smb toggle.png]


You got the idea 🙂

Was this what you were looking for?

 

(I've never had a ZFS pool to share, so I don't know if it means the whole pool or only part of it. Give it a try; you can't break anything with these experiments (unless you format the drive). But I think it means ALL ZFS member disks at once.)
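If you want to look before you leap, a couple of read-only commands from the Unraid terminal won't change anything (I'm using "tank" as the pool name here; substitute your own):

```
zpool status tank    # pool health and which disks belong to it
zfs list -r tank     # datasets and mount points that could be exported
```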

 

 


On the face of it, yes! This is what I was looking for. However, I enabled sharing on both the disk itself and the tank partition, and applied the same for each disk, but they still do not show up as network shares. I also restarted the array and rebooted, and it's the same thing.
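In case it helps diagnose: I found these while searching, so treat them as my guesses rather than the official way to check what Samba is exporting:

```
testparm -s                  # dump the live Samba config and its share stanzas
smbclient -L localhost -U%   # anonymously list what Samba actually exports
```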

12 minutes ago, Enessi said:

One more thing! When I run ls /boot/config/shares/ I get the attached pic. Shouldn't Unraid load those configs on boot and make them accessible over the network?

[screenshot: shares.png]

Yes, if you have set the array to autostart.
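The autostart flag itself lives on the flash as well; from memory (so treat the key name as an assumption):

```
# /boot/config/disk.cfg is on the flash; key name from memory:
grep startArray /boot/config/disk.cfg
# startArray="yes"  -> the array (and its shares) comes up at boot
```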

44 minutes ago, Enessi said:

However, I enabled sharing on both the disk itself and the tank partition

Hmm, I'm not sure, but I guess you should only share the "tank" thing. Sharing the disk may result in security problems. Usually you only address a disk directly if you're trying to pass it on to a VM or so.

 

One note: I don't think Unassigned Devices uses the same "shares" folder as the Unraid OS. I don't have an unassigned disk to play with right now. But I seem to remember that for each filesystem you can also switch on automount at boot time separately.

 

 


I've set the array to autostart. I removed the disk share as well. Seems you are correct: Unraid doesn't use that shares folder for ZFS. Interestingly, when I share the tank partition, the disk itself updates to shared as well. I'll keep looking through things and report back if I find something else or other odd behavior. Thanks for the help :)


By the way, I assume your "tank" thing already contains data? If not, you can have it much easier.

Unraid uses "cache pools". Drop the word "cache" and what you have left is an automatically managed ZFS pool. Just add all the disks you like and it will create a ZFS pool with the best matching RAID level.

 

But then, you have Unraid; it's much better if you just create an array and put in all but one disk. Leave out the biggest disk and add it later as the parity drive. Unraid does not use RAID (hence the name) but has a much more flexible approach. The drives do not all need to be the same size; you can use any size as long as the parity drive is the biggest of all.

And the single disks are independent; you can take them out and use the data outside the array if needed.

 

Maybe your demands are better served by a completely different setup?

 


I may have gone down the wrong rabbit hole to begin with lol.

I have 4x16TB drives with data already copied to them, ~8TB in total. I could copy it all back to the source drive, but I don't want to deal with another 20-hour copy if I can avoid it. Hence this original post, as it does work until reboot anyway...
And to verify: Unraid's "cache pool" is a ZFS pool, but it operates as a traditional cache unless I set "use cache" to "only", in which case it will retain all data written to the pool until it fills up?

 

If this is the case I may suck it up and move everything around again.

  • Solution
25 minutes ago, Enessi said:

And to verify: Unraid's "cache pool" is a ZFS pool, but it operates as a traditional cache unless I set "use cache" to "only", in which case it will retain all data written to the pool until it fills up?

Yep 🙂
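If you're curious where that setting lands: it ends up in the share's .cfg file on the flash. Roughly like this, with key names from memory, so treat them as assumptions:

```
# /boot/config/shares/media.cfg ("media" is a hypothetical share name)
shareUseCache="only"    # data stays on the pool; the Mover never moves it
shareCachePool="cache"  # assumed key naming which pool backs the share
```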

But beware: if you create a new cache pool, it will format your drives and the data will be lost!

Although it's already ZFS, Unraid demands a certain partition size/structure; otherwise it will refuse the drives and demand that you format them.

You may try, but it depends dramatically on how, and with which OS, your "tank" was originally created.

 

Usually ZFS pools on Unraid are a pair of RAID 1 (mirrored) drives used as cache. I prefer XFS filesystems, even for cache (no need for a mirror, most files get moved off the cache anyway, and my faith in NVMe SSDs is huge 🙂).

ZFS sometimes gives mysterious troubles, and I don't like troubles.

 


I went with ZFS partially due to the hype about the filesystem's benefits. I originally had this whole thing set up as XFS, but my friend recommended ZFS instead, so I made the switch. There seems to be some controversy about ZFS vs XFS, and I decided to go with my friend's recommendation. I originally had no issues with XFS, and everything functioned as intended without much work. ZFS, on the other hand, has been... temperamental. Certainly due in part to my ignorance, but equally to its obtuse implementation in Unraid. Thus far, anyway.

I'll consider things and make my final decision as to whether I keep things as-is, move to a cache pool, or go back to XFS. Regardless, thanks for your help in answering my questions on this. It's been a journey!


You should consider copying the data back and starting from scratch.

 

Create an array with just 2 drives. Leave out the parity for now. (If you don't need the 4th drive for copying now, you may create the array with 3 drives right from the beginning.)

Copy over your data to the array (note: every top-level folder in the array will automatically become a share).
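Something like this from the Unraid terminal will do; the mount points are examples (an Unassigned Devices disk usually shows up under /mnt/disks):

```
# Copy from the old disk into a user share, preserving attributes:
rsync -avh --progress /mnt/disks/olddisk/ /mnt/user/media/
```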

Now add the parity drive (it will take several hours, depending on disk speed; here I have 18TB of data and a 20TB parity drive, and it takes about 1 day and 3 hours). You may already use the data, or add files, but it will be SLOW. Maybe it's better to take a nap instead, or go on vacation.

 

At the end you will have a protected array, like your ZFS with RAID5, but with several advantages for the future.

 

 


Yeah, the original source disk where all the data came from is a 16TB as well, so moving things back to it wouldn't be an issue at all, just a time sink. So the recommendation would be: copy all data from the current zpool back to the 16TB in my PC, format the ZFS disks, add all but one to the array, and copy the data back to the new array. Once finished, add the parity drive and wait for the sync? And this would be an XFS array, not the cache pool, correct?


Yes, this would be a native Unraid XFS array, the thing you bought Unraid for 🙂

Then go out and buy one or more SSDs (NVMe preferred). Depending on your LAN speed, a "normal" SATA SSD is enough for 1Gb/s LANs; if you go 10GbE you need at least a PCIe 3.0 NVMe drive. Set the cache entries for the shares to "yes" and fire up the Mover once per day, or on a different schedule that suits your needs.

For "always fast" Data you should consider another NVMe SSD (or 2 for RAID1 if you are not relying on the promises of the vendors) and create another Cache Pool with a different Name (and it wont be a "cache" but just a normal, but fast , data area).

Create shares on this "pool" for frequently used and often-written documents (for instance, the home share of the Windows box).

 

But of course, you still need a backup device; no RAID or Unraid saves you from backups!

(I, for instance, have a Windows box with the same number and sizes of disks; it is fired up once a day, runs a robocopy to the Unraid box, and then shuts itself off again.)
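The nightly job is essentially one line; the server and path names below are examples, and the exact flags are just my starting point, tune them to taste:

```
:: Mirror the Unraid share to a local disk on the backup box:
robocopy \\TOWER\media D:\backup\media /MIR /FFT /R:2 /W:5
```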

 

Time spent now, at the beginning, does not count; plan and experiment well and it will serve you longer...

 

(I never understood the ZFS hype, especially on Linux, which only has limited ZFS capabilities... I prefer it simple, reliable, and fixable. And it's easier to restore some files from a single disk than to reconstruct them from a RAID array.)

 


@MAM59 You're a saint, lol. Heaven forbid I use the thing I purchased for its intended purpose!! I believe I'll be going back to XFS then. Sorry, ZFS-bois.
This has been quite the adventure, but at least I'll be back in familiar territory again :) I wish I could afford 3-2-1 storage, but I suppose I can be happy with 2-1. Regardless, I hope you have a great holiday, wherever you are!

 

