miicar

Everything posted by miicar

  1. I had the same issue on 2 of my servers. Seems as though the "exclusive share" setting wasn't playing nice with the recycle bin. The quick fix was to make it pool-only, reboot, and change it back to an exclusive share again. That seems to have fixed it (I still can't understand why, though). I am monitoring the situation, as my employees use the recycle bin when they do dumb stuff! It's been a saviour (till it randomly wasn't, and I figured out that toggle to make it work again). Please report back if that fixes it!
  2. BTRFS pool with a mix of 1TB and 4TB spinners. I recently added a third 4TB disk, and the balance took over a month to finish! While it was running, the pool was almost useless, and the Docker containers accessing it constantly crashed and restarted because of the rebuild. So I stopped all reading/writing to the pool for a few hours (thinking it would speed it up), but the speed of the operation stayed about the same. The pool has only around 10TB of total usable space... it would have taken me a couple of days to move the data off, destroy and remake the pool with the added disk, and move the data back... but a couple of months for the designed rebuild process? WTF?!?! (And yes, there were TBs of unused space in the pool before the new 4TB drive was added.) This isn't the first time I've had BTRFS pools be painfully slow to rebuild, or to add or remove a disk (in fact, I've done the move-destroy-move-back routine for more critical data pools because it's much faster than waiting for the balance to work as designed). So I'm wondering: is it something I'm doing wrong, or are BTRFS rebuilds just this slow?
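     For reference, the command-line equivalent of what the GUI kicked off (pool mount point and device name below are placeholders, not my exact setup) is roughly:

         # add the new disk to the existing BTRFS pool
         btrfs device add /dev/sdX /mnt/poolname
         # rebalance data across all devices in the background (the slow part)
         btrfs balance start --bg /mnt/poolname
         # check progress from another shell
         btrfs balance status /mnt/poolname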
  3. UPDATE: Seems to be working still; I just did an update to 6.12.8 and everything is still good for the recycle bin! Still curious whether anyone else has had this issue, and why it might have happened!?!?
  4. I also have this issue after updating to 6.12.8. I have deleted the folder, but I am adding a drive to a pool at the moment, so I cannot reboot and see if it's fixed until that's done. I hope this gets escalated and fixed quickly, as it seems directly related to the 6.12.8 update.
  5. Anyone know how to change a camera's RTSP URL after it's set up? There doesn't seem to be any way to edit that info once the camera is set up, only copy it.
  6. Yeah, that diag was after I did the share-array shuffle (no reboot). And yeah, I use the %m tag to help my staff find their lost items (and snitch on who might have deleted them). I will keep a close eye on it now, and if it happens again, I'll post again. It was weird that my home server also had the same issue (with the same "fix"). My home server is the test bed for things I implement at the shop, so it's practically identical (other than its size). I have used this plugin for a few years without issues, up until now. I am using a mixed bag of pool formats, and I wonder if the new ZFS being baked in has caused (more) unintended consequences? Just a hunch. Thanks again for the app and your replies!
  7. So, just to update: I checked some things on my home server. It is much simpler, so it only has one exclusive share (Steam games on a pool of old HDDs), which I had always excluded from the recycle bin anyway. Well, I decided to remove that exclusion from the recycle bin, then restarted Unraid (for good measure), and BAM! SAME ISSUE! Deleted files were not getting sent to the recycle bin (the non-exclusive shares didn't seem to have a ".recycle.bin" folder in them till now either... I should have checked them first, but my suspicion is they didn't work either, as there is nothing showing in the recycle bin plugin, and I'm sure I have deleted a file or two in the last month)!
     THE FIX: I went into the share settings of that exclusive pool, toggled the secondary storage to "Array" (and also changed the mover action to keep files on the pool... not sure if this is needed), hit Apply, then immediately changed secondary storage back to "None" and hit Apply again! And it works now!! Again, I am still confused and somewhat annoyed! But that's the update so far!
     P.S.: Both Unraid servers are on 6.12.6 and all plugins are updated to the latest versions.
  8. ...well, now I am pissed! I disabled exclusive shares (added the array as secondary), and then I read your message about it working. So I changed it back to pool-only, and suddenly everything is fine... like, for the last 3 months, through a few reboots etc., it didn't work; now it seems to work as expected!! ...I hate this... I hate not knowing what fixed things when I did nothing special. 🤬 The weirdest thing is, I had noticed it wasn't working on ANY shares (even the non-exclusive ones); now they all seem fine everywhere. You have no idea how mad I am right now that it works... If you have any idea why cycling that one thing triggered it to work, I would love to know. I'll include my diags for the hell of it... maybe you will see something there. Thanks for your prompt replies and great plugin. elmstorage-diagnostics-20240220-1926.zip
  9. Excuse my ignorance with some terminology; I'll try to explain myself better... Recently, Unraid implemented "exclusive shares", which bypass FUSE via a symlink in the background. This lets us access a normal "//tower/sharename/" in Windows while, in reality, it goes directly to //tower/poolname/sharename/, which speeds up access times. I had changed all my pool-only shares to "exclusive" a couple of months back to take advantage of this feature, assuming everything would be handled as normal; but I didn't test the recycle bin part. Today, one of my staff went into a panic because they couldn't find a file they had accidentally deleted (your plugin has saved their a$$ many times in the past... I do owe you a beer, or 6). Upon investigating, I realized that nothing I deleted in these pools (accessed from Windows through the normal share, NOT a disk share) was showing up in the .recycle.bin folder! As soon as I disabled exclusive shares for that pool, it was back to normal. This really sucks, as there is a noticeable access-time advantage to using exclusive shares, but the recycle bin is a must as well!! If there is a way to have both, it would make me (and my staff) super happy!
     Yeah, basically. I have a bunch of pools dedicated to different things (camera recording has a spinning-rust pool, CAD design has its own SSD pool, sales/marketing has its own pool, and the OG cache pool only does appdata and Dockers; the pools have their respective shares that are accessed as normal shares through //tower/sharename/ in Windows). It's really handy for boosting R/W and access speeds! And yes, I should probably switch to Proxmox or something more industry-geared, but Unraid has worked amazingly well for what we do over the last 6 years, and even better since the multi-pool ability arrived a couple of years back!
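     (If you want to see the mechanism for yourself, an exclusive share shows up as a symlink under /mnt/user; the share and pool names below are placeholders:)

         # an exclusive share is a symlink straight to the pool, bypassing FUSE
         ls -ld /mnt/user/sharename
         # prints the symlink target, e.g. /mnt/poolname/sharename
         readlink /mnt/user/sharename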
  10. So, just to clarify: this recycle bin will NOT work on EXCLUSIVE SHARES (deleted from a Windows Explorer window, of course)? Are there plans to have this plugin work with exclusive shares in the future? There is quite an access-time benefit to having exclusive shares set, but it seems a lot of plugins and apps have been having trouble with them since Unraid implemented the feature.
  11. BU = backup. UPDATE: Well, that didn't work... in fact, it removed the previous BU contents of a folder I changed BACK to an exclusive share (it left the empty folder, though, to trick me) 🤪. So, back to the drawing board (or find another solution / do it manually). The symlinks show up, but LB doesn't follow them.
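     Since LB is an rsync front-end under the hood, my guess is this comes down to rsync's symlink handling (paths below are placeholders; this is a sketch, not necessarily how LB invokes it):

         # -a implies -l: symlinks are copied AS symlinks (what LB appears to do)
         rsync -av /mnt/user/ /backup/dest/
         # -L / --copy-links follows symlinks and copies what they point to
         rsync -avL /mnt/user/ /backup/dest/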
  12. So, I fixed this by changing the Container Path and the Host Path of the shares to "/mnt" only. Now it presents itself looking normal in the LB GUI! I also changed the access mode to read-only for safety; I don't want to give myself the ability to accidentally select "Synchronize Source and Destination" and destroy my (current) life's work! Since my backup location is not on the same server, this works... if I were using disks on the same server, I would have to turn it back to read/write. That Synchronize option terrifies me, but that's a me problem. (Update: all it did was present the symlinks; LB didn't actually follow them and back up their contents.)
     Just an observation: when choosing "Only Include" and having one folder in the include list, one would expect that folder to be all LB would look at, but it isn't. It seems to scan the whole "/mnt/user/" (source) folder first... So, to speed things up for single-share backups, point the Source directly at the share (either directly, or via the preferred "/mnt/user/(share name)" method), not just at its parent folder.
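     In plain Docker terms, the mapping amounts to something like this (the image and container names are assumptions; in Unraid you would set this through the template's path fields rather than a run command):

         # map the whole /mnt tree into the container, read-only
         docker run -d --name luckybackup -v /mnt:/mnt:ro ich777/luckybackup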
  13. Alright... so, for the most part, using "/mnt" as the mount point works to give me access to all my shares (regardless of whether they are exclusive or not). It still gives me "/mnt/user/user/" as a source, which is visually annoying, but I can live with it. It's actually backing up everything now (as I desire). SO, now that I've got the basic functionality working, it's time to figure out how to add a secure BU-only user and make my remote offsite BU connection a bit safer. But that's another story, and I am pretty sure I understand how to pull that off. Thank you for pointing me in the right direction, @ich777. (Stupidly enough, I vaguely remember pulling my hair out over this same issue with Krusader before... with the same solution.) Maybe a note should be added to the install guide to give users a clue about this?!
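     For the secure BU-only user, the pattern I have in mind (untested sketch: rrsync ships with rsync but its install path varies by distro, and the key and paths below are placeholders) is to lock a dedicated SSH key to rsync transfers under a single directory on the remote box:

         # on the remote backup box: restrict this key to rsync under one directory
         echo 'command="/usr/bin/rrsync /mnt/backups",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAA... luckybackup-key' >> ~/.ssh/authorized_keys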
  14. binhex/arch-krusader. HAHA, I think you got me! I just looked at the mount for this... and it's indeed "/mnt"! I can't test it in LB at the moment, as I'm doing a disk rebuild and I don't wanna mess things up while I'm half asleep, but I will give this a shot and report back. I kinda feel dumb now. Go crazy, you say? I just might! I'm one of those weird people who likes to understand the inner workings of things, to understand the broader use... it's a blessing and a curse.
  15. Haha, it's 3:30 AM... not doing that tonight. Thanks for your suggestions; I will figure it out! I don't give up. (And if that was the simple solution, I'm not sure it was me who was misunderstanding here.)
  16. I just want to be able to say "/mnt/user/" and BU all my shares at once, like Krusader does, for example. I don't want to forget that I moved things around to a different pool to fit my needs and miss backups for that share (or worse, have it cause LB to crash and not back up at all). I'm trying to think long-term / bigger picture. I'll keep poking at this till I figure it out! Thanks for the suggestions.
  17. So you're saying LB should be able to do that too?
  18. Well, that didn't work! But to revisit what you said about Dockers not playing well with exclusive shares: why is it that all my other Dockers can read exclusive shares, including having their install folder as "/mnt/user/appdata/(docker name)" rather than "/mnt/(pool name)/appdata/(docker name)", like we had to do to bypass FUSE before the Unraid update? I changed ALL my Dockers (and the folders they look at) back to "user" and haven't had a lick of problems. (Ironically enough, LB has "preserve symlinks" as a Command Option...) Not trying to be difficult... just trying to fix it!
  19. I will figure out how to do this, give it a try, and report back. I understand what you are saying. So how can we get it to work better? Is @kilrah's suggestion of mapping each share (and hoping you remember to update the mapping if things change a little) the only way? That seems silly! When I gave LB the "/" mount point (instead of "/mnt/user/"), the symlinks show up, but as shortcuts (?) only, and it doesn't actually transfer the data within (admittedly, I haven't set each share in LB's "Include" section, only "Exclude"d the temp folders I don't want backed up; I was hoping to just exclude a couple of folders that shouldn't be backed up and have the rest covered, as in the sketch below... but it's never that easy, is it). I'm trying to back up my whole server share set... everything, and I'm trying to keep it simple so I don't have to teach an assistant rocket science. Thanks for your help, @ich777. Sorry for the frustration; I hate it when things are not working properly, or need a Mickey Mouse band-aid to work.
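     In plain rsync terms, that exclude-a-few-and-take-the-rest intent looks roughly like this (folder names and destination are placeholders; just a sketch):

         # back up every share under /mnt/user except a couple of temp folders,
         # following the exclusive-share symlinks (-L)
         rsync -avL --exclude='temp/' --exclude='scratch/' /mnt/user/ user@offsite:/backups/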
  20. You are correct; I misspoke (I do understand this... I've just been up for more hours than a human should be over the last week, and my brain stalled)... still, the issue remains.
  21. This is not a solution, as sometimes the data is on BOTH the cache and the array (before the mover runs)... also, I just tested it again, and I cannot map to the pool itself unless I change the mount point to "/" and use "mnt/user/mnt/user/(sharename)", but that gives LB way more access than it needs. There HAS to be a better way... Hopefully @ich777 can chime in with their thoughts.
  22. Make sure you guys have "Console Mode" selected in the schedule profile... that stumped me for a bit too, but it does run on schedule after I checked that. Make sure you re-run "cronIT !!" after making those changes. As for LuckyBackup... it doesn't work like that in LB (I'm guessing you didn't test this before replying). Also, that is the "old" way of doing things, since Unraid now handles it automatically and all locations should be "/mnt/user/(sharename)", with Unraid deciding between exclusive and FUSE access since the update! So my guess is LB needs an update to see this new feature!
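     To sanity-check that "cronIT !!" actually wrote the schedule (the container name below is a guess; use whatever yours is called), you can peek at the cron table inside the container:

         # list the cron entries luckyBackup generated
         docker exec luckybackup crontab -l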
  23. That's good to hear... I guess the note is just left over from before the workaround/fix was baked into Unraid. I'll look into that later... unless it's maybe causing a conflict with the mpt3sas driver? UPDATE: So, LSI called me back (and gained a ton of respect for doing so)! Apparently the LSI-9200 cards I have do not have thermal throttling, which could be the reason one hard-crashed my kernel (and rebooted the machine) while running the backup last night. I'm going to have to keep an eye on that, and I was advised to add a fan to the controller's heatsink.
  24. OK, I think I found the key to my problem. Like some other Dockers I've run into, it hasn't been remapped to recognize multiple cache pools yet! This is just a guess, but when Unraid bypasses the FUSE layer (exclusive share) because the share lives only on a cache pool, that seems to strip LB of the ability to see the share at all!! As soon as I change my secondary storage from "None" (the requirement for an exclusive share) to "Array", LB sees the share... whether it's hidden or shared via SMB at all. BUT doing this ALSO removes that share's exclusive access, slowing down some of the R/W operations. So this is annoying! I don't feel smart enough to fix it and retain my exclusive-access settings. I hope someone can tell me what I am doing wrong? Or is LB just not able to see exclusive shares, so I have to go back to writing manual rsync scripts for everything? I want to use my exclusive shares again, AND LB.
  25. All good; I have a lot going on in this server! BUT, as a side note, should I be trying to flash my onboard HBA to different/better firmware? It's worked for 2 years without any issues, and now that I have added an expander to it, it still seems fine with everything (as long as I keep it to 16 disks or fewer... otherwise both myself and it have a mental breakdown, and that's another 26-hour story). I did see this note under the mpt3sas driver info: "# limetech - Workaround for kernel crash with LSI 92xx based HBA cards (and may be others) options mpt3sas max_queue_depth=10000". And my add-on card is an LSI-9200-8e. I'm guessing I have to set that in the card's BIOS? Or is this in Unraid somewhere?
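     For what it's worth, that "options mpt3sas ..." line is a kernel module parameter rather than a card BIOS setting; as I understand it (a sketch; double-check before relying on it), the Unraid way is a conf file on the flash drive:

         # persists across reboots; applied when the mpt3sas driver loads
         mkdir -p /boot/config/modprobe.d
         echo "options mpt3sas max_queue_depth=10000" > /boot/config/modprobe.d/mpt3sas.conf
         # then reboot so the driver picks up the new option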