ZekerPixels

Posts posted by ZekerPixels

  1. I have just swapped the parity drive; the rebuild and check are done. But I needed more space, and the original plan was to add a Disk7. I have decided not to add a Disk7 after all, to be safe from potential issues with removing it at a later point. So, with the recent addition of the 16TB drive, I can get 4TB of additional storage on the array using the drive I have now. I just have to expedite the purchase of another drive to exchange it with later.

     

    Plan for now, adding 4TB to the array;
    Parity1; 16TB
    Parity2; 8TB
    Disk1;   4TB
    Disk2;   8TB
    Disk3;   8TB
    Disk4;   8TB
    Disk5;   8TB
    Disk6;   8TB

     

  2. Hello everyone,

     

    I have some questions that have come up and I can't seem to find a clear answer to, all regarding parity, swapping, adding, and removing drives. I will describe what I'm doing or planning in steps, to give the questions a story.

    To no one's surprise, I need more space. I have been watching HDD prices and haven't found anything of interest over the past year or so. I have price alerts set, but prices in my region don't seem to drop. Anyway, I got one drive to swap into parity, and I will temporarily use the old parity drive as an additional data drive to increase storage for now.

     

    0. Current Configuration

    I use dual parity, and while exchanging disks I want to keep parity valid.

    Original Configuration;
    Parity1; 8TB
    Parity2; 8TB
    Disk1;   4TB
    Disk2;   4TB
    Disk3;   8TB
    Disk4;   8TB
    Disk5;   8TB
    Disk6;   8TB

     

    1. Swapping parity

    I got a new drive, which is now my biggest drive, so it has to become the parity drive. (I precleared and zeroed the disk.)

    This is easy to do by unassigning the old drive and assigning the new one; parity1 begins to rebuild while parity2 stays valid. So, I assign a zeroed disk as parity1, which is basically an XOR, right? Well, during the rebuild it passes the 4TB point, those disks spin down, and it continues to rebuild. It passes the 8TB point and those disks spin down too. But then it keeps writing to rebuild parity for hours; why? We already know the remaining 8TB of the 16TB drive is all zeros, because we have done the pre-read, zero, and post-read. Beyond 8TB there are no other disks, so everything is zero, and XOR of 0 and 0 stays zero.
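
    To illustrate why that region can't hold anything but zeros (a minimal sketch, not Unraid's actual code): single parity is the byte-wise XOR across the data disks, so a stripe where every contributing byte is zero must produce a zero parity byte.

    # XOR (single) parity over one stripe where all data bytes are zero
    d1=0x00; d2=0x00; d3=0x00
    printf 'parity byte: 0x%02x\n' $(( d1 ^ d2 ^ d3 ))   # -> 0x00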

    New Configuration 1;
    Parity1; 16TB
    Parity2; 8TB
    Disk1;   4TB
    Disk2;   4TB
    Disk3;   8TB
    Disk4;   8TB
    Disk5;   8TB
    Disk6;   8TB

     

    2. Adding a disk

    As I mentioned, I need space and now have an 8TB drive left over. I will also preclear/zero this drive and add it to the array as Disk7. I have not done this before, and most information covers a single parity drive. But I assume that assigning an additional zeroed disk to the array keeps parity1 valid, since it is XOR based. I'm not sure about parity2: it is Reed-Solomon based, and as far as I understand it is a weighted sum over the data disks, so an all-zero disk should contribute nothing there either, but does it need a rebuild?

    New Configuration 2;
    Parity1; 16TB
    Parity2; 8TB
    Disk1;   4TB
    Disk2;   4TB
    Disk3;   8TB
    Disk4;   8TB
    Disk5;   8TB
    Disk6;   8TB
    Disk7;   8TB

     

    3. Future

    Now that I have added a 16TB drive, I intend to eventually add/swap in more big drives, as well as remove Disk7 from the array again, for a total of 8 array drives. I want to remove Disk7 because it would be the only drive not mounted on dampers, and to reduce the noise and power usage of the additional drive.
    As described before, swapping a disk in means assigning the new drive and letting it rebuild. But now I want to remove Disk7, and I'm specifically looking to do this without breaking parity1, because I think parity2 will always need a rebuild. So I need to move all the data off to the other disks and then zero the drive, so that the XOR-based parity stays valid. Has anyone actually done this? https://docs.unraid.net/unraid-os/manual/storage-management/#alternative-method
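
    For reference, the core of that alternative method comes down to something like the following (a sketch only; I'm assuming Disk7 maps to /dev/md7, and on recent releases the device is named md7p1 instead; verify the mapping first, since zeroing the wrong device destroys data):

    # after moving all data off Disk7: zero it THROUGH the parity-protected md
    # device, so parity1 is updated as the zeros are written and XOR stays valid
    dd bs=1M if=/dev/zero of=/dev/md7 status=progress
    # then Tools -> New Config, keep all assignments except Disk7,
    # and tick "Parity is already valid" before starting the array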

    Possible Future Configuration;
    Parity1; 16TB
    Parity2; 16TB
    Disk1;   16TB
    Disk2;   16TB
    Disk3;   8TB
    Disk4;   8TB
    Disk5;   8TB
    Disk6;   8TB
  3. 1 hour ago, dlandon said:

    Can UD mount the disk?

    Yes, I can mount the disk, share the partition over the network, and read and write data to it. Everything works, except attaching this drive to the VM.

     

    I have now tried multiple USB storage devices to pass through, and they all work as expected;

    - USB sticks of various sizes, formatted FAT and exFAT

    - a portable 1TB NTFS USB HDD

    - a portable 256GB NTFS USB SSD

    - a USB enclosure and a USB HDD dock with various 500GB, 1TB, 1.5TB, 2TB, and 3TB HDDs, either NTFS or zeroed, in which case they show up in w10 disk management.

     

    However, the USB enclosure, which is old, with a 4TB NTFS HDD does mount but is not correctly recognized (it reports 16TB). Using the USB dock, this leads to the same "reset SuperSpeed USB device number 2 using xhci_hcd" as before with the 8TB external.

    I tried both the external enclosure and the HDD dock with this 4TB drive on a Windows machine to verify. The 4TB also doesn't work in the external enclosure there and reports 16TB (as I mentioned, it's quite old, so it could be a limitation). The 4TB disk in the HDD dock is fine on a Windows machine.

     

    I only have one 4TB available and the 8TB is an external, so I can't test more. But it appears I have this VM mounting issue only with big (>4TB) drives. Everything else works in unRAID and also works under Windows.

     

    I also tried passing the USB at boot of the VM (so, not using the Libvirt Hotplug USB plugin), but this leads to the same "usb 2-5: reset SuperSpeed USB device number 13 using xhci_hcd".

  4. 1 hour ago, itimpi said:

    If you click on the Settings icon for a particular drive under UD then it is a toggle there.   It needs to be set as otherwise UD will take control of the drive stopping the Libvirt Hotplug USB handling it.

    I'm currently using that drive with the UD share mounted. Anyway, I didn't see the passed-through toggle initially, because that option disappears when the drive is mounted.

     

    Trying to pass through my 8TB Seagate Desktop SMR NTFS HDD, doing exactly the same with the same settings, gets stuck every time on that reset line after "Synchronizing SCSI cache" in the log.

    sd 8:0:0:0: [sdl] Synchronizing SCSI cache
    usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    

     

    I posted part of the system log before, with a USB that I did manage to mount in the VM and one that I didn't. I tried it again, paying attention to the toggles in the UD settings, but they are set the same.

    Meanwhile I also tried various USB sticks (FAT, exFAT) and a 1TB portable NTFS HDD, and they mount to the VM directly every time. However, when trying my HDD dock (which I haven't used on unraid before) with other big HDDs, it doesn't work either.

  5. 2 hours ago, itimpi said:

    If you want to pass the USB drive through to the VM have you made sure the Passed Through flag is set for that drive in its UD Settings?

    An individual drive setting? Just to confirm, where can I find the passed-through flag?

    If you are referring to the VM settings, then no, it's not checked, but it's also not needed because of the Libvirt Hotplug USB plugin, which works perfectly to mount USB drives like a USB stick and other HDDs.

     

    1 hour ago, dlandon said:

    The serial number of that disk is woefully incomplete.  The only thing there is the model number.  There is no serial number.  I'd say that Linux can't handle that.

    Post diagnostics so I can see what udev reports as the serial number on that disk.  Sometimes some strange characters in the serial number can cause issues.

    Sorry, that was my doing in anonymizing the information: the serial number, and my server also isn't called tower, etc.

  6. Hi all,
    I'm trying to mount a USB disk to a w10 VM; somehow I still haven't been able to figure out why I can't get it to work with the disk I need. (As a temporary solution I have it mounted as a network share.)

     

    I have the latest versions of the Unassigned Devices and Libvirt Hotplug USB plugins installed. What I do is: plug in the USB drive, it appears in the Unassigned Devices section on the main page, and I go to the VMs tab. Under Hotplug USB, I select the running w10 VM and the USB device, and click attach.
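
    (I assume the plugin does roughly the equivalent of attaching a USB hostdev via libvirt; a manual sketch for reference, where the vendor/product IDs from lsusb and the VM name are assumptions:)

    # hypothetical manual equivalent of the hotplug plugin, using virsh
    cat > /tmp/usb.xml <<'EOF'
    <hostdev mode='subsystem' type='usb'>
      <source>
        <vendor id='0x0951'/>   <!-- Kingston, per lsusb -->
        <product id='0x1666'/>
      </source>
    </hostdev>
    EOF
    virsh attach-device "Windows 10" /tmp/usb.xml --live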

    Doing this with a USB stick (FAT, exFAT) or another 1TB USB HDD (NTFS) makes it pop up in the VM. Excellent; however, the drive I need to connect won't do this.

     

    Here is the system log of me plugging in a USB stick, attaching, detaching, and removing the drive;

    Nov 25 19:01:50 tower kernel: usb 1-5: new high-speed USB device number 4 using xhci_hcd
    Nov 25 19:01:50 tower kernel: usb-storage 1-5:1.0: USB Mass Storage device detected
    Nov 25 19:01:50 tower kernel: scsi host8: usb-storage 1-5:1.0
    Nov 25 19:01:51 tower kernel: scsi 8:0:0:0: Direct-Access     Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 4
    Nov 25 19:01:51 tower kernel: sd 8:0:0:0: Attached scsi generic sg11 type 0
    Nov 25 19:01:51 tower kernel: sd 8:0:0:0: [sdl] 60437492 512-byte logical blocks: (30.9 GB/28.8 GiB)
    Nov 25 19:01:51 tower kernel: sd 8:0:0:0: [sdl] Write Protect is off
    Nov 25 19:01:51 tower kernel: sd 8:0:0:0: [sdl] Mode Sense: 45 00 00 00
    Nov 25 19:01:51 tower kernel: sd 8:0:0:0: [sdl] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
    Nov 25 19:01:51 tower kernel: sdl: sdl1
    Nov 25 19:01:51 tower kernel: sd 8:0:0:0: [sdl] Attached SCSI removable disk
    Nov 25 19:01:52 tower unassigned.devices: Disk with ID 'Kingston_DataTraveler_2.0_-0:0 ()' is not set to auto mount.
    Nov 25 19:02:28 tower kernel: br0: port 3(vnet1) entered blocking state
    Nov 25 19:02:28 tower kernel: br0: port 3(vnet1) entered disabled state
    Nov 25 19:02:28 tower kernel: device vnet1 entered promiscuous mode
    Nov 25 19:02:28 tower kernel: br0: port 3(vnet1) entered blocking state
    Nov 25 19:02:28 tower kernel: br0: port 3(vnet1) entered forwarding state
    Nov 25 19:05:09 tower kernel: usb 1-5: reset high-speed USB device number 4 using xhci_hcd
    Nov 25 19:05:10 tower kernel: usb-storage 1-5:1.0: USB Mass Storage device detected
    Nov 25 19:05:10 tower kernel: scsi host8: usb-storage 1-5:1.0
    Nov 25 19:05:11 tower kernel: scsi 8:0:0:0: Direct-Access     Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 4
    Nov 25 19:05:11 tower kernel: sd 8:0:0:0: Attached scsi generic sg11 type 0
    Nov 25 19:05:11 tower kernel: sd 8:0:0:0: [sdl] 60437492 512-byte logical blocks: (30.9 GB/28.8 GiB)
    Nov 25 19:05:11 tower kernel: sd 8:0:0:0: [sdl] Write Protect is off
    Nov 25 19:05:11 tower kernel: sd 8:0:0:0: [sdl] Mode Sense: 45 00 00 00
    Nov 25 19:05:11 tower kernel: sd 8:0:0:0: [sdl] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
    Nov 25 19:05:11 tower kernel: sdl: sdl1
    Nov 25 19:05:11 tower kernel: sd 8:0:0:0: [sdl] Attached SCSI removable disk
    Nov 25 19:05:12 tower unassigned.devices: Disk with ID 'Kingston_DataTraveler_2.0_-0:0 ()' is not set to auto mount.
    Nov 25 19:06:50 tower kernel: br0: port 3(vnet1) entered disabled state
    Nov 25 19:06:50 tower kernel: device vnet1 left promiscuous mode
    Nov 25 19:06:50 tower kernel: br0: port 3(vnet1) entered disabled state
    Nov 25 19:07:51 tower kernel: usb 1-5: USB disconnect, device number 4

     

    However, I have an 8TB Seagate Desktop drive which won't connect; here is the system log;

    Nov 25 20:30:03 tower kernel: usb 2-5: new SuperSpeed USB device number 2 using xhci_hcd
    Nov 25 20:30:03 tower kernel: usb-storage 2-5:1.0: USB Mass Storage device detected
    Nov 25 20:30:03 tower kernel: scsi host8: usb-storage 2-5:1.0
    Nov 25 20:30:04 tower kernel: scsi 8:0:0:0: Direct-Access     Seagate  Desktop          040B PQ: 0 ANSI: 6
    Nov 25 20:30:04 tower kernel: sd 8:0:0:0: Attached scsi generic sg11 type 0
    Nov 25 20:30:04 tower kernel: sd 8:0:0:0: [sdl] Spinning up disk...
    Nov 25 20:30:17 tower kernel: ............ready
    Nov 25 20:30:17 tower kernel: sd 8:0:0:0: [sdl] Very big device. Trying to use READ CAPACITY(16).
    Nov 25 20:30:17 tower kernel: sd 8:0:0:0: [sdl] 15628053167 512-byte logical blocks: (8.00 TB/7.28 TiB)
    Nov 25 20:30:17 tower kernel: sd 8:0:0:0: [sdl] 2048-byte physical blocks
    Nov 25 20:30:17 tower kernel: sd 8:0:0:0: [sdl] Write Protect is off
    Nov 25 20:30:17 tower kernel: sd 8:0:0:0: [sdl] Mode Sense: 4f 00 00 00
    Nov 25 20:30:17 tower kernel: sd 8:0:0:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Nov 25 20:30:17 tower kernel: sdl: sdl1
    Nov 25 20:30:17 tower kernel: sd 8:0:0:0: [sdl] Attached SCSI disk
    Nov 25 20:30:18 tower unassigned.devices: Disk with ID 'ST8000DM004- ()' is not set to auto mount.
    Nov 25 20:31:22 tower kernel: sd 8:0:0:0: [sdl] Synchronizing SCSI cache
    Nov 25 20:32:03 tower kernel: usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    Nov 25 20:32:33 tower kernel: usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    Nov 25 20:33:17 tower kernel: usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    Nov 25 20:33:47 tower kernel: usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    Nov 25 20:34:31 tower kernel: usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd
    Nov 25 20:35:01 tower kernel: usb 2-5: reset SuperSpeed USB device number 2 using xhci_hcd

    It got stuck here till I detached: "kernel: hrtimer: interrupt took - ns"

     

    Did I miss something? This should work, right?

     

  7. Solution: it requires a couple of commands every time, but at least it is possible to move files between shares on zfs.

     

    The cache is formatted as zfs, and every time a new root directory gets created, it becomes a dataset. To move between shares it has to be a plain folder, which you can create manually by cd'ing to the cache and using mkdir.

     

    What I did is;

    1. Move the whole "Downloads" folder to the array.
    2. cd /mnt/cache
    3. mkdir Downloads
    4. Move everything back onto the cache, because I have Downloads set to cache only.

    Note; this still breaks again, because when you move from Downloads to another share which is not yet on the cache, a dataset will automatically be created. So when moving files you also have to create the destination folder first, every time (see the sketch after these two commands), because mover will pull this folder off the cache every time it runs. The Downloads folder doesn't have to be recreated, because it stays on the cache.

    1. cd /mnt/cache
    2. mkdir Destination
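
    To save some typing, the routine can be wrapped up like this (a sketch under my own assumptions: the pool is named cache and mounted at /mnt/cache, and zfs list is only used to check whether a dataset already exists):

    #!/bin/bash
    # ensure a share exists on the cache as a plain folder, not a dataset
    share="$1"                                   # e.g.: ./mkshare.sh Destination
    if zfs list "cache/$share" >/dev/null 2>&1; then
        echo "cache/$share is already a dataset; move its contents off first" >&2
        exit 1
    fi
    mkdir -p "/mnt/cache/$share"                 # plain folder, so mv stays a rename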

    Obviously, because my Downloads share is now a folder and not a dataset, you can't create snapshots of it, etc. But the whole point of me using zfs on the cache is to utilize the snapshot capabilities for the appdata, domains, and system folders, nothing else.

  8. 22 hours ago, JorgeB said:

    I missed that part, that's normal because with zfs a new dataset is created for every share, so that's for all purposes a different filesystem, one workaround if you really want that would be to create the shares manually with mkdir, so no dataset gets created, but of course will lose option to snapshot just that dataset, etc.

    I kind of had an idea it had something to do with how zfs works, so it is because of the datasets. Well, thank you for leading me toward a solution with mkdir; it's not ideal, but it works more easily than every other solution I had.

  9. 8 hours ago, JorgeB said:

    I have no issues moving data from zfs share to share, can you give a quick example of what you are trying to do and the error you get?

     

    I don't get an error. What goes wrong is that when I move a file/folder which is on the zfs cache, I cannot move it between shares: it will always copy, not move. So it takes a long time and doubles the writes to the SSDs. Are you sure it moves, not copies?

     

     

    I don't know a good way to give a clear description, but I will try to explain; it's kind of difficult.

    • I have an array which consists of 6 drives and 2 parity drives in xfs. The cache consists of 2 SSDs and is a zfs mirror.
    • On the array I have a share "Media", which uses cache as primary storage and Array as secondary. Mover moves from cache to array.
    • On the array I have a share "Documents", which uses cache as primary storage and Array as secondary. Mover moves from cache to array.
    • On the cache I have a share "Downloads", which is cache only.

    Let's say I want to move something from Documents to Media.

    1. I open two windows, "//tower/Documents/" and "//tower/Media/", and move the file.
    2. This does not really work, because it does not move the file: it copies and deletes, which results in double the writes to the SSD and takes a long time.
    3. The solution to this is to use a rootshare, so I open "//tower/rootshare/Documents/" and "//tower/rootshare/Media/"
    4. Now moving works, and it is instantaneous and without extra writes.

    Now, I want to move a file from Downloads to Media.

    1. Again I open two windows, "//tower/Downloads/" and "//tower/Media/", and move the file. This copies and does not move the file.
    2. As before, we should use the rootshare, so I open "//tower/rootshare/Downloads/" and "//tower/rootshare/Media/"
    3. Here is where the issue comes in: Windows still copies the files instead of moving them, and Dolphin on Linux refuses outright: "Could not rename file smb://path".

     

    Before, when I had a btrfs cache, everything worked as I described in "Documents to Media", whether it involved the cache or not. But since I have the zfs cache, I cannot move anything that is on the cache, because it will always copy.
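
    For anyone who wants to see it server-side (a sketch; the paths are from my setup): a rename only succeeds within one filesystem, every zfs dataset counts as its own filesystem, and so mv across datasets silently falls back to copy and delete.

    # each dataset reports its own filesystem ID, so this mv cannot be a rename
    stat -f -c '%i  %n' /mnt/cache/Downloads /mnt/cache/Media   # two different IDs
    mv /mnt/cache/Downloads/file.iso /mnt/cache/Media/          # copies, then deletes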

     

  10. Hello everyone,

     

    I have been using unraid for quite some time, and with the recent release of 6.12.2 I decided to switch my cache pool to zfs. So far I like it, and the zfs master plugin makes creating and restoring snapshots really easy. I converted my btrfs cache to zfs by following along with the two most recent videos from SpaceinvaderOne.

     

    However, after doing this I have one problem: I'm unable to move files from one share to another (and just let mover put them on the array once a month) when the file I want to move exists on the zfs cache pool. Before converting to zfs, I could do this without any problems on 6.12.2 by using a rootshare.

     

    So what happens is: I have a downloads share which is cache only; obviously my downloads go into this share. Say I downloaded the new Linux ISO and want to move it to the isos share; because this file is in the zfs cache pool, I can't move it there anymore.

     

    Another example: let's say I have a document on my PC and copy it to the photos share. Because my photos share uses the cache, I get a new photos folder on the cache containing this file, placed in the zfs cache pool. But, stupid me, I put the file in the wrong folder. Let's move it from the photos folder to the documents folder using the rootshare, like I always did. Well, that does not work anymore. After letting mover put it on the array, the photos folder on the cache disappears (because everything has been moved onto the array) and my file is now on disk1. Now I can move the file, but obviously it is not on the drive it is supposed to be on, because my documents share is on disk2.

     

    I can't possibly be the only person encountering this; I described two, I assume, very common things everyone does. Previously the solution for moving between shares was to use a rootshare. If that doesn't work with zfs, it is definitely mentioned somewhere, but I can't find anything about it except my own topic on the forums, probably because the search term zfs leads to many, many other things.

     

    So, as a question: how do I move a file on zfs to another share, and how do I do this without directly accessing unraid?

    Thx

     

     

    • Copying to another share does work, but then you have to manually remove the original. Also, copying takes time and unnecessarily doubles the writes on the cache SSD.
    • Moving a file from the array to anywhere, including a cache-only share, works.
    • A file existing on the zfs cache cannot be moved between shares. This does not work with a rootshare, and also not with krusader.
  11. I thought both cache drives were on the motherboard, but I just checked;

    1 cache drive is using the motherboard SATA, and the other one is connected to the LSI9211.

     

    The disk it reports is just the disk it tries to write to; the only consistent element is the cache.

    I'm sure the cache is messed up; it now reports 2TB (it is 1TB).

     

    Anyway, I need to figure out how I can copy everything from the cache to an external or something.
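
    One way to do that (a sketch; the external's mount point under Unassigned Devices is an assumption):

    # copy the cache contents onto an externally mounted disk, preserving attributes
    rsync -avh --progress /mnt/cache/ /mnt/disks/external/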

     

    Edit: OK, the cache drive ending in 208 is definitely fucked, but I think I can save most of the data from the other drive. Unfortunately it takes quite some time, because it's about 500GB.

     

    EDIT / UPDATE

     

    So far the issue is solved; here is what I have done. After discovering that the cache was the problem, crashing the server every time something got written to or read from it, I made a new USB to start fresh, put one of the original cache disks in as an array disk (btrfs), and tried to read the data off. The first disk immediately crashed the server again, but I could pull all the files from the second disk.

    So, basically, I reinstalled everything the way it was before. I had backups of the dockers and a document with all the changes I made in the past. It took about 2 hours to set everything back to how it was. I checked the latest files I copied from the cache, and all files seem to be unharmed by this situation.

     

    Conclusion: I don't think it was necessary to start from a fresh install, but it didn't take too much time and everything works as it's supposed to.

     

     

  12. I have removed the parity disks from the array; otherwise I would need to cancel the parity check every time. This way we can also exclude it having anything to do with generating parity when moving to the array.

     

    12:38 turn on syslog and reboot

    12:41 start array

    12:43 download something to cache only folder using a docker

    12:45 Crashed and automatic reboot

    12:48 start array

    12:51 start mover

    12:51 Crashed and automatic reboot

    12:55 generate "diagnostics1", disable docker and reboot

    12:58 start array (docker and vms are disabled)

    13:00 start mover

    13:02 Crashed and automatic reboot

    13:05 generate "diagnostics2"

    turn off syslog and get the syslog file

     

    OK, so the syslog contains 3 crashes;

    - At the time of the first crash, there is nothing in the syslog.

    - At the second crash, also nothing

    - At the third crash, a bunch of BTRFS errors. There is at least something going on with the cache, but it could have been caused by the very frequent crashes.

     

  13. As for what the issue could be: the server can complete a parity sync without any issues. I would think temperature is good and power is good too, because during a parity check there is more CPU utilization and all disks are doing something, which of course requires more power. I don't have an extra PSU or any spares actually, so I can't really swap out parts to try something.

     

    The syslog that I posted should contain two crashes. Anyway, I will make a new one, this time writing down the times of events; give me like an hour.

  14. I had no solution or any clue as to what the issue could be, so I made a fresh 6.9.2 USB.

    I quickly set up my configuration, shares, etc., and it crashes.

     

    So, I have a fresh unraid install with the same issue as before. To me, that points to a hardware issue; what could do it?

    I removed the other files; these are the new diagnostics and syslog.

     

    I'm not sure of the time of the first crash; the second one was at 02:20.

     

     

     

  15. I also thought it could be the RAM, so yes, I have run memtest, with single sticks and both together, resulting in no errors after 8 passes in each configuration. Also, the server can complete a parity check without any issues; if it were the memory, it probably shouldn't be able to do that, since with mover (or any other method of moving from cache to array) it crashes every time within a minute.

     

    The only weird line in the syslog is line 169, which is also close to the crash. But it doesn't show anything, because it's also there when it doesn't crash.

    "ntpd[1758]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized"

    I don't know, but /Settings/DateTime shows the correct time.

     

     

  16. Hi all,

     

    The server has a problem: it crashes within a short time, every time, after running mover. I have been using this system with 6.9.2 since release and it worked fine before. I have already done the following;

    - parity check

    - docker safe permissions

    - fix common problems

    - disabled VMs

    - disabled Dockers

    - mover, unbalance, krusader

    - memtest86, no issues on a couple of passes

     

    With VMs and Dockers disabled, it still crashed every time within a minute of invoking mover.

     

    I hope you guys have an idea what the issue could be.

    Anyway, thanks for all the help.

    ZPx

     

     

    Updated: https://forums.unraid.net/topic/110753-692-mover-crashes-server/?tab=comments#comment-1010818

     

     

     

  17. 9 hours ago, binhex said:

    woke up after a night of dreaming about code (happens a lot to me!) and did one last test and i can confirm i see no leak on my end. i ran a jackett instance with no proxy or netowrk routing to see what a leak would look like, then ran the same add and search query on the secured jackett and no leak whatsoever, so im happy any dns leaks are coming from the pc running the browser, which IS still weird!, sounds like a bug in the jackett ui to me!.

     

    I set pihole as DNS on my PC, and the domain requests are indeed coming from my PC and not from unraid.

    I copied all of the domains from the list of indexers in the jackett webui and made them into a blocklist (not reliable, I know, just to test something out). Testing again, all the domains from the jackett UI get a blocked status in pihole, yet testing the indexers still works with the domains blocked, indicating jackett is using the VPN and that the requests from the PC are completely unnecessary.
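
    For a more reliable check than a pihole blocklist (a sketch; the interface name is an assumption, adjust to your NIC), watching port 53 on the unraid host shows whether any plaintext DNS leaves it; VPN-tunnelled lookups won't appear:

    tcpdump -ni eth0 port 53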

  18.  

    1 hour ago, binhex said:

     

    hmm that is interesting", i wonder if a plugin or something in firefox is messing with the connection to jackett somehow?! its pretty weird!.

     

    well i have some good news and some bad news, the good news is i cannot replicate the dns leak, i am using tcpdump running directly on my host and i see no sign of any dns queries yet when network binding jackett to privoxypn, the bad news is i cant replicate your issue and therefore have no idea whats causing it!!.

     

    perhaps try resetting firefox, or run it in safe mode, this may tell you whether its a plugin/addon on in or not.

     

    i think im going to go to bed and think about this overnight, its very late here!.

     

    Quite late here as well, goodnight. One thing to note: it is not only with firefox.

     

    At first I thought I had misconfigured something, but you saw those settings, and I'm not doing anything stupid in the config. I'm also not doing anything really advanced, just pihole. There are lots of people using it; someone would have noticed if it were a big issue. That's why I suspected the browser at first, and why I tried different browsers on different devices. I'm a bit out of things to try for now; maybe I can come up with something tomorrow.

     

    47 minutes ago, binhex said:

    @ZekerPixels i couldnt let it go without one more dig into this and i think i know whats going on!.

     

    if you look at your pihole logs carefully you should notice that all dns queries are coming from your pc, they are NOT coming from the container, if you click on "add" in the jackett ui and add an index site then the lookup (bluebird-hd.org being one) is done on the host running the browser, NOT on the jackett instance, this is surprising to me but its def the case, i can replicate it here.

     

    however, if you click on manual search (or 'test all') and do a search in jackett ui then there is no dns query leak from your pc, it is all done in jackett, give it a test and let me know, i will check back in the morning, but i am pretty happy now that there is no dns leak from the container.

     

    The first time I noticed the domain requests I was playing with sonarr etc. and thought, hey, that's not right. However, I cannot directly see the source of the requests in pihole, because they all have 192.168.1.1 (the router) as the client. So, I see no difference in pihole between the server and the PC. I think I can get that sorted without too much hassle; I will try it tomorrow. It's a bit weird that the browser pings all those domains.

     

    I still had the webui open and just checked: clicking "test all" pops up in pihole, as does using "manual search". So, that would still originate from the PC. I will set pihole as DNS on my PC and check the origin.

     

    I'm sorry for keeping you up at night (never thought I would say that) worrying about your container.

    Thanks for the support and the effort making sure there are no issues with the container itself.

     

     

     

     

  19. 19 minutes ago, mbc0 said:

    🤣Thanks Mate Password now changed! 😄 

     

    It was removed fairly quickly, but I could imagine using a somewhat more difficult password. It sounded a bit like the default welcome123 password company admins tend to use.

     

    My jackett log ends with the following; it's a fresh install with everything on default settings.

    Hosting environment: Production
    Content root path: /usr/lib/jackett/Content
    Now listening on: http://[::]:9117
    Application started. Press Ctrl+C to shut down.

    I think the proxy settings binhex referred to are in the webui.

     

    @binhex, regarding my previous post:

    I discovered something. I mentioned using firefox on my PC to access the jackett webui, and the domains show up in pihole. On the same PC I used different browsers, chrome and edge, to access the jackett webui, and now they don't show up in pihole.

    Interestingly, doing the same on my phone using chrome, it does show up in pihole. I fired up a VM, tried chrome, and it shows up in pihole.