smashingtool

Posts posted by smashingtool

  1. On 7/26/2021 at 7:53 AM, SohailS said:

    Hi, hoping someone can help me.

     

    I have set Syncthing up to sync the photos I take on my phone to a folder on my server, but when I try to delete unwanted items it says I need permission from Server\nobody to make changes.

     

    Have I done something wrong here?

    I came here with the exact same question. I can tell you that going to the Unraid webGUI > Tools > New Permissions and running it against the share(s) containing your Syncthing files will fix it, so that you can edit the files Syncthing has moved around.

  2. 36 minutes ago, Squid said:

    You have a docker container that is referencing /mnt/user/disks. You should be able to see that on the Docker page by hitting the caret to expand any of the path mappings.

    Hmm, I have several mappings to /mnt/disks, but I don't see anything mapped to /mnt/user/disks...

     

    Oof, yeah, you're kind of right, but in a different way. I use the following mappings with my dupeguru containers:

     

    /storage/disks <> /mnt/disks

    /storage <> /mnt/user

     

    I've always done that to make file browsing inside the container simpler, but that must be what is actually creating the linkage.

     

    Anyway, you're my hero. I'll clean this up and that should fix it...

     

    Edit: Yep, all fixed. Thank you!

    This issue is biting me now and the fixes above have not worked. After a reboot, the share gets recreated every time. I'm completely locked out of SMB access again after it was sort of working before one of the reboots...

     

    I never saw "disks" as a share in the GUI, but I did see the folder in the root of one of my disk shares. I also saw a disks.cfg file on my flash drive somewhere. Do I need to delete that (or NOT delete that)?

  4. 12 hours ago, trurl said:

    Since "single" doesn't have any redundancy, you could just forgo btrfs and make each disk XFS, each in its own separate pool. They would all be part of user shares. I have my dockers and VMs on a "fast" pool which is just one NVMe using XFS.

    I tried this, but from the looks of it, shares can only be assigned to one pool, so having one share span multiple pools doesn't seem possible.

  5. 4 hours ago, itimpi said:


    Not quite sure what you want?

     

    In Unraid 6.9.0 you can set up multi-drive pools to use the “Single” btrfs profile, which means the available space is the sum of the drives' sizes, but the data is not protected by parity.

     

    If you want multiple arrays that work like the current data array, where the available space is the sum of the data drives but you can still have parity protection, then this IS a future roadmap item.

     

    I want to be able to use SSDs in the array, but per the warnings from "Fix Common Problems", that could cause issues with rebuilding from parity due to SSD garbage collection.

     

    So what I currently do instead is use an extra 2 SSDs in Unraid via Unassigned Devices. I also have a cache drive SSD, but I have never messed with any pool functionality due to the warnings about BTRFS RAID1.

     

    What I ultimately want is to be able to have a user share spread out over multiple SSD drives. I'm not currently concerned with parity protection for said drives, but maybe down the road, some sort of fault tolerance would be worthwhile.

     

    I don't see how to do "multi-drive pools to use the “Single” btrfs profile". Per the help toggle in the UI, "When configured as a multi-device pool, Unraid OS will automatically select btrfs-raid1 format". https://wiki.unraid.net/Unraid_OS_6.9.0#Multiple_Pools also seems to lack a mention of this functionality, but maybe I am missing something.

     

    Having multiple pools seemed like a way of getting around using Unassigned devices, but based on "When you create a user share or edit an existing user share, you can specify which pool should be associated with that share. ", I assume that I can't have a user share span multiple pools...

    So I have an unassigned device that has some files with extremely long file names, long enough that path + filename can exceed the 255-character limit that most Windows programs can deal with. I use a workaround where I create virtual shares(?) in smb-extra.conf that point to subdirectories several layers deep with a single-character label. Example:

     

    [Q]
      path = /mnt/disks/Extra/Sync/Queue/Incoming/
      browseable = yes
      public = yes
      writeable = yes

    In this case, "Extra" is the unassigned device, and "Q" is the resulting share that points to the path above. Doing this keeps path + filename under 255 characters, solving my issue.

     

    What I just discovered, however, is that if I delete a file while browsing the "Q" share, it does not end up in the recycle bin. But if I delete a file in the same folder while using the full path starting at "Extra", it does get put into the recycle bin.

     

    Is there a way for me to config this so that it recycles the files when I am browsing via this shortened virtual share?
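
    If Unraid's recycle-bin behavior comes from Samba's vfs_recycle module (an assumption on my part; the Recycle Bin plugin for Unraid is built on it), one thing worth trying is adding the recycle options directly to the virtual share definition in smb-extra.conf. A sketch, not verified against the plugin's exact settings:

```ini
[Q]
  path = /mnt/disks/Extra/Sync/Queue/Incoming/
  browseable = yes
  public = yes
  writeable = yes
  # vfs_recycle intercepts deletes and moves files into the repository
  vfs objects = recycle
  recycle:repository = .Recycle.Bin
  recycle:keeptree = yes
  recycle:touch = yes
```

    The plugin normally injects options like these into the shares it manages; a share defined manually in smb-extra.conf would not get them automatically, which would explain the behavior described above.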

    None of my torrents start in Deluge anymore, and I'm not sure exactly why. My main private tracker says I am not connectable.

     

    I have what should be the relevant ports forwarded from my router to my Unraid IP address. I also have UPnP and whatnot turned on. Despite that, nothing is downloading. I feel like somewhere along the way I may have broken something while tinkering.

     

    I do see this in the log:

     

    12:08:42 [WARNING ][deluge.i18n.util :83 ] IOError when loading translations: [Errno 2] No translation file found for domain: 'deluge'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'32pag.es'/": invalid hostname: b'32pag.es'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'stackoverflow.tech'/": invalid hostname: b'stackoverflow.tech'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'iptorrents.com'/": invalid hostname: b'iptorrents.com'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'empirehost.me'/": invalid hostname: b'empirehost.me'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'bakabt.me'/": invalid hostname: b'bakabt.me'

     

    I have no idea where that "b" after "http://" is coming from, but that seems like a likely problem. Anyone have any idea?
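
    For what it's worth, the b'...' wrapper is how Python prints a bytes object; the log pattern suggests the tracker hostname is being read as raw bytes and formatted into the URL without being decoded first. A minimal illustration in plain Python (not Deluge's actual code):

```python
# b'...' is Python's repr of a bytes object. Formatting bytes straight
# into a URL string reproduces the malformed hosts seen in the log.
host = b"iptorrents.com"                       # hostname read as raw bytes
bad_url = "http://%s/" % host                  # bytes repr leaks into the URL
good_url = "http://%s/" % host.decode("utf-8") # decode first, then format

print(bad_url)   # http://b'iptorrents.com'/
print(good_url)  # http://iptorrents.com/
```

    If that is what is happening, it points at a bug or version mismatch in the client rather than a networking problem on the Unraid side.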

    I am not sure how long this has been going on; torrents are kind of secondary for me, but now I realize that I am dead in the water. New torrents added to Deluge do not download. My main private tracker says I am not connectable. Deluge has been my main torrenting container for a while, but after some troubleshooting, I went ahead and installed a fresh instance of the LinuxServer qBittorrent container, and it has the exact same issue: unable to even begin a download.

     

    I have what should be the relevant ports forwarded from my router to my Unraid IP address. I also have UPnP and whatnot turned on. Despite that, nothing is downloading. I feel like somewhere along the way I may have broken something while tinkering.

     

    One of the hints that this is a larger networking issue is that my deluge log says:

     

    12:08:42 [WARNING ][deluge.i18n.util :83 ] IOError when loading translations: [Errno 2] No translation file found for domain: 'deluge'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'32pag.es'/": invalid hostname: b'32pag.es'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'stackoverflow.tech'/": invalid hostname: b'stackoverflow.tech'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'iptorrents.com'/": invalid hostname: b'iptorrents.com'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'empirehost.me'/": invalid hostname: b'empirehost.me'
    12:09:59 [WARNING ][deluge.httpdownloader :315 ] Error occurred downloading file from "http://b'bakabt.me'/": invalid hostname: b'bakabt.me'


     

    I have no idea where that "b" after "http://" is coming from, but that seems like a likely problem.

     

    I just lost 2 months of files due to this. I just wasn't thinking and made error after error after ERROR. OMG. I almost did backups first, but I decided to tackle the problem I was having with mover first. Obliterated hundreds of gigs of files on my cache. WTF was I thinking?

     

    FML.

  10. On 2/28/2019 at 9:52 PM, leftovernick said:

    I've edited my /nginx/site-confs/default file with:

    
    location /ubooquity {
        proxy_pass http://10.0.1.200:2202/ubooquity;
        include /config/nginx/proxy.conf;   
    }

    and checked my ubooquity admin page to ensure the reverse proxy prefix is set to ubooquity, and restarted letsencrypt.

     

    I'm not currently getting any error from letsencrypt, but I can't access https://ubooquity.mydomain.me or https://mydomain.me/ubooquity

     

    What step am I missing here?

     

    Edit: ...I just realized (as I was guessing around) that it's set to XXXX.duckdns.org/ubooquity/

    Is there a way to have it use my private domain? I'd much prefer to use ubooquity.mydomain.me or at the least mydomain.me/ubooquity

     

    Edit 2: It also goes through a security warning the first time you visit that you have to override... any way to stop that?

     

    Edit 3: Okay, I've gotten a little further. I set up a file in the proxy-config folder titled ubooquity.subdomain.config (I basically just took another config file and changed it to match the ubooquity info).

     

    
    # make sure that your dns has a cname set for ubooquity
    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name ubooquity.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
    
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /login;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_ubooquity ubooquity;
            proxy_pass http://$upstream_ubooquity:2202/ubooquity/admin;
        }
    
    
    }


    This takes me to an odd-looking login page (basically the login page with no CSS), and no matter what I enter, it won't let me through. When I check the ubooquity logs, I get this error (depending on the login info I try)

     

     

    When I turn off "Protect shared content with user accounts", it takes me into the main page, again with no CSS and no comics or images (and the links don't go anywhere).

     

    Basically, it looks like all it's sending through is the HTML, with no CSS or backend.

    I've reached the point where the same thing is happening to me. I'm having no luck fixing it. Have you resolved this?

    So I had this working until recently, but I moved, and it seems that Cox blocks port 80 nowadays. I had been configured to use DuckDNS for my domain, with HTTP verification. That no longer works at all, my certs expired, and now they can't renew. So I decided to try DNS validation... Cloudflare doesn't seem to work with DuckDNS, since my domain is not a registered domain.

     

    Is my only option to buy a domain name? Is there any way to make the DNS validation work with DuckDNS domains?

     

     

  12. On 7/29/2018 at 4:31 AM, John_M said:

     

    They are actually numbered from 0 to 15 and they are paired even-odd. So threads 0 and 1 are physical core 0, threads 2 and 3 are physical core 1, etc., all the way to threads 14 and 15 being physical core 7. You'll see the arrangement if you switch to the Dashboard page of the webGUI. This is different from how an Intel processor is arranged, where in an i7, for example, threads 0 and 4 represent physical core 0, threads 1 and 5 are physical core 1, threads 2 and 6 are physical core 2, and threads 3 and 7 are physical core 3. I'm not sure why they are different, but the AMD arrangement makes more sense to my simple mind.

     

    So that would mean 0-7 are the first CCX, and 8-15 are the second, right? 

    I assume that mixing and matching between the 2 CCXs would be bad. I'll likely give my VM sole access to a whole CCX, so 8-15, unless that's a bad idea for some reason.
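
    On Linux, the pairing described above can be checked directly from sysfs. A small sketch (the paths are standard Linux sysfs, so this should work on an Unraid box too):

```python
# Group logical CPUs (threads) by the physical core id sysfs reports,
# to see the even-odd pairing described above.
from collections import defaultdict
from pathlib import Path

def threads_per_core(sysfs="/sys/devices/system/cpu"):
    cores = defaultdict(list)
    for cpu in sorted(Path(sysfs).glob("cpu[0-9]*"),
                      key=lambda p: int(p.name[3:])):
        core_id = cpu / "topology" / "core_id"
        if core_id.exists():
            cores[int(core_id.read_text())].append(int(cpu.name[3:]))
    return dict(cores)

if __name__ == "__main__":
    for core, threads in sorted(threads_per_core().items()):
        print(f"physical core {core}: logical CPUs {threads}")
```

    On a Ryzen layout like the one described, each core should list an even-odd thread pair, e.g. logical CPUs 0 and 1 on physical core 0.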

    I built my current Windows 10 VM with virtio-win-0.1.126-2.iso, and I see now that there is a much newer stable version available. What would I need to do to update my VM to the newest VirtIO? Do I need to do a full reinstall? Or is it simply a matter of mounting the disk and installing the handful of drivers I installed previously?

     

    I ask because I am getting some weird slowdowns and short lockups randomly, and I'm trying to rule out as many potential causes as possible.

     

    Thanks!

  14. I am also experiencing the CPU core issue. I assume that editing the XML manually will allow me to enable the other 3 cores that are greyed out to me, but I don't know exactly what that should look like. 

     

    I have an i3-6500, so 2 cores, meaning 4 logical cpus. The XML file currently shows:

     

    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
    </cputune>

     

    Would I increment only the vcpu or both?
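
    For what it's worth, both numbers increment in the usual layout: vcpu is the guest's virtual CPU index and cpuset is the host logical CPU it is pinned to. For 4 logical CPUs, a sketch would look like this (the <vcpu> count elsewhere in the XML has to match as well):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
```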

     

    Thanks.

  15. FWIW, the current fix is definitely usable, but it's still much less usable than it was a few versions ago.

     

    For example, I was just trying to delete things in picture mode, and had the detail window up in front of the results.

     

    I selected a few files and chose to delete them. Nothing happened visually. I could no longer close the details window, but I could still reduce the size of the results window. So I had to put the results window into windowed mode and move the detail window out of the way of the delete prompt so that I could actually choose to delete the files.

     

    Regardless of which mode I use, said delete prompt appears behind the results window. This is still usable, but not how the app used to work, and it definitely makes it more difficult to use.

  16. I'm having an issue on the current build where after choosing to delete some duplicates, the results window gets lost behind the initial window, and there is no way to get it back. The height and width are set to 1600 and 900. I'm not sure if my usage of the details window while choosing the files is contributing to the issue.

    Oh, I saw the notes about the update and notification changes, and had already updated my notification settings to check for OS updates 4 times a day.

     

    As for the new page, I guess I was expecting the plugin page to have a new tab for OS updates, due to it saying "enhanced plugin manager" with separate sections for both. Thanks for the info, I see it now under tools.

     

    Does the lack of a button for checking for updates mean that it checks every time you access/refresh the page as well?

     

    Thanks guys.

  18. Thank you CHBMB! I had it like this before:

     

        location / {
            proxy_pass https://192.168.1.130:2202/ubooquity;
            proxy_max_temp_file_size 2048m;
            include /config/nginx/proxy.conf;
        }

    But that was not working. Now I can get to it. I didn't realize that the reverse proxy prefix needed to be part of the location, rather than simply as part of the server_name.
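
    For anyone landing here later, the working form implied above looks something like this (a sketch reusing the IP, port, and prefix from the snippet in this post):

```nginx
# The /ubooquity prefix must appear in the location, not just in proxy_pass
location /ubooquity {
    proxy_pass https://192.168.1.130:2202/ubooquity;
    proxy_max_temp_file_size 2048m;
    include /config/nginx/proxy.conf;
}
```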