smashingtool

Members
  • Content Count

    156
  • Joined

  • Last visited

Community Reputation

0 Neutral

About smashingtool

  • Rank
    Member

Converted

  • Gender
    Undisclosed


  1. Hmm, I have several mappings to /mnt/disks, but I don't see anything to /mnt/user/disks... Oof, yeah, you're kinda right, but in a different way. I use the following mappings with my dupeguru containers: /storage/disks <> /mnt/disks and /storage <> /mnt/user. I've always done that for simpler file browsing in there, but that must be what is actually creating the linkage. Anyway, you're my hero; I'll clean this up and that should fix it... Edit: Yep, all fixed. Thank you!
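The overlapping mappings above mean the same underlying files are visible at two container paths, so a duplicate finder sees every file twice. A quick way to confirm that two paths are really one file is to compare device and inode numbers (a minimal sketch; the helper name is mine, and the paths you'd pass in would be the two mount points):

```python
import os

def same_file(path_a: str, path_b: str) -> bool:
    """Return True if both paths resolve to the same on-disk file
    (same device and inode), as happens with overlapping bind mounts."""
    stat_a, stat_b = os.stat(path_a), os.stat(path_b)
    return (stat_a.st_dev, stat_a.st_ino) == (stat_b.st_dev, stat_b.st_ino)
```

Run against, say, `/storage/disks/disk1/foo` and `/storage/user/foo`, a True result confirms the container is seeing one file through two mappings.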
  2. This issue is biting me now, and the fixes above have not worked. After a reboot the share gets recreated every time. I'm completely locked out of SMB access again after it was sort of working before one of the reboots... I never saw "disks" as a share in the GUI, but I did see the folder in the root of one of my disk shares. I also saw a disks.cfg file somewhere on my flash drive. Do I need to delete that (or NOT delete it)?
  3. Tried this, but from the looks of it, shares can only be assigned to one pool. So having one share span multiple pools doesn't seem possible.
  4. I want to be able to use SSDs in the array, but per the warnings from "Fix Common Problems", that could cause issues with rebuilding from parity due to SSD garbage collection. So what I currently do instead is use two extra SSDs in unRaid via Unassigned Devices. I also have an SSD cache drive, but I have never used any pool functionality because of the warnings about BTRFS RAID1. What I ultimately want is a user share spread across multiple SSDs. I'm not currently concerned with parity protection for those drives, but maybe down the road, some
  5. Am I understanding correctly that if I want to set up multiple SSDs in a "pool" where the pool size is the sum of each drive's size, I need to (or should?) wait for the mentioned future support of "Unraid array pools"?
  6. So I have an unassigned device that has some files with extremely long file names, long enough that path + filename can exceed the 255-character limit that most Windows programs can deal with. I use a workaround where I create virtual shares(?) in smb-extra.conf that point to subdirectories several layers deep, with a single-character label. Example:

     [Q]
     path = /mnt/disks/Extra/Sync/Queue/Incoming/
     browseable = yes
     public = yes
     writeable = yes

     In this case, "Extra" is the unassigned device, and "Q" is the resulting share that points to the path above. Doing this keeps path
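The trick above works because the Windows length limit applies to the path as the client sees it, not the server-side path, so a one-character share name trims everything between the share root and the file. A small sketch of the arithmetic (the server name "tower" and file names here are made-up placeholders):

```python
def client_path_len(server: str, share: str, rel_path: str) -> int:
    """Length of the UNC path as a Windows client sees it,
    e.g. \\\\tower\\Q\\file.ext -- the ~255/260-char limits apply
    to this full string, not to the path on the server."""
    return len(f"\\\\{server}\\{share}\\{rel_path}")
```

For a file deep under the share, exposing it as `\\tower\Q\...` instead of `\\tower\Extra\Sync\Queue\Incoming\...` saves the length of the intermediate directories on every path, which is often enough to get back under the limit.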
  7. I have also been struggling with this. I'm tempted to try 6.8.1-rc1, since that updates Samba to 4.11.4. Interesting: what's the performance like when you use the IP address?
  8. None of my torrents start in Deluge anymore, and I'm not sure exactly why. My main private tracker says I am not connectable. I have what should be the relevant ports forwarded from my router to my unRaid IP address, and I also have UPnP and whatnot turned on. Despite that, nothing is downloading. I feel like somewhere along the way I may have broken something while tinkering. I do see this in the log:

     12:08:42 [WARNING ][deluge.i18n.util :83 ] IOError when loading translations: [Errno 2] No translation file found for domain: 'deluge'
     12:09:59 [WARNING ][delu
  9. I am not sure how long this has been going on; torrents are kind of secondary for me, but now I realize I am dead in the water. New torrents added to Deluge do not download, and my main private tracker says I am not connectable. Deluge has been my main torrenting container for a while, but after some troubleshooting I went ahead and installed a fresh instance of the LinuxServer qBittorrent container, and it has the exact same issue: unable to even begin a download. I have what should be the relevant ports forwarded from my router to my unRaid IP address. I also have UPnP and whatnot
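A "not connectable" report from a tracker usually means the forwarded port isn't reachable from outside, and it helps to separate "nothing is listening" from "the forward isn't working". A minimal TCP reachability probe, as a sketch (host and port are placeholders for your WAN address and the forwarded torrent port):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connect. True means something accepted the
    connection; False means it was refused, filtered, or timed out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Note that this only proves the forward when run from outside your LAN (or via the tracker's own port checker); testing from inside the network doesn't exercise the router's forwarding rule unless it supports NAT hairpinning.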
  10. I just lost two months of files due to this. I just wasn't thinking and made error after error after ERROR! OMG. I almost did backups first, but I decided to tackle the problem I was having with mover instead. Obliterated hundreds of gigs of files on my cache. WTF was I thinking? FML.
  11. Like someone else here, I would like to request some conf.sample files for the LinuxServer Ubooquity container. I am trying to set it up with DuckDNS validation and am utterly failing to make it work. I've tried everything that used to work for me, plus everything I have found in this topic and in the Ubooquity topic that pertains to it.
  12. I've reached the point where the same thing is happening to me. I'm having no luck fixing it. Have you resolved this?
  13. Any idea why Mylar now shows "None (master)" as the version at the bottom?
  14. So I had this working until recently, but I moved, and it seems that Cox blocks port 80 nowadays. I had been configured to use DuckDNS for my domain with HTTP verification. That no longer works at all, my certs have expired, and now they can't renew. So I decided to try DNS validation... Cloudflare doesn't seem to work with DuckDNS, since my domain isn't one registered with a DNS provider I control. Is my only option to buy a domain name? Is there any way to make DNS validation work with DuckDNS domains?
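For what it's worth, DuckDNS does expose a TXT parameter on its update endpoint, which is what DNS-01 validation needs; ACME clients with a DuckDNS plugin use it instead of the Cloudflare one. A sketch of the request such a client sends (the subdomain and token values are placeholders, and the helper name is mine):

```python
from urllib.parse import urlencode

def duckdns_txt_url(subdomain: str, token: str, txt: str) -> str:
    """Build the DuckDNS update URL that sets the TXT record for an
    ACME DNS-01 challenge; issuing an HTTP GET to it applies the change."""
    query = urlencode({"domains": subdomain, "token": token, "txt": txt})
    return f"https://www.duckdns.org/update?{query}"
```

The ACME client sets the challenge value via this URL, waits for the record to propagate, and lets the CA query `_acme-challenge.<subdomain>.duckdns.org`, so no port 80 (or purchased domain) is required.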
  15. So that would mean 0-7 are the first CCX and 8-15 are the second, right? I assume that mixing and matching between the two CCXs would be bad. I'll likely give my VM sole access to a whole CCX, so 8-15, unless that's a bad idea for some reason.
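The assumption in that post (each CCX's threads numbered contiguously) can be sketched as below, though the real topology varies by CPU and is worth checking with `lscpu -e` or the unRaid CPU pinning page before pinning anything; the function and its default are illustrative, not a general rule:

```python
def ccx_groups(total_cpus: int, cpus_per_ccx: int = 8):
    """Split a contiguous CPU numbering into per-CCX groups, assuming
    the OS numbers each CCX's threads consecutively (true for this
    16-thread example, but not guaranteed on every CPU/kernel)."""
    return [list(range(start, start + cpus_per_ccx))
            for start in range(0, total_cpus, cpus_per_ccx)]
```

Under that assumption, `ccx_groups(16)` yields `[0-7]` and `[8-15]`, and pinning the VM to the second group keeps all of its threads on one CCX, avoiding cross-CCX cache traffic.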