Posts posted by rramstad

  1. Just now, tshorts said:

    Thanks. The cables, yes, good idea. I'll swap out the SATA cables; if that doesn't help, I'll remove one RAM stick at a time and see if it stops. Hoping for the best. 

    Weird that the files work fine and the errors come totally at random: a torrent can seed for a year and suddenly drop to 99.9%, and the same torrent can be hit multiple times. I wonder if Transmission perhaps only tries reading a segment from the HDD once, the requested segment gets corrupted by a bad cable, and it decides the torrent is corrupted, while the parity check will make multiple requests if a segment looks corrupted. 

    I'll try verifying a bunch of non-errored torrents. 

     

    I'd recommend taking the server offline and using something like a bootable USB stick to run Memtest or similar overnight.

     

    That's a really good way to figure out if you have a memory problem or not.

     

    No, you misunderstand parity.  If incorrect data is written to the array, the array doesn't know it's incorrect.  Parity will report it as matching what is expected -- it's just not what the torrent sent.

     

    Parity can only help recover from errors that happen after the write, i.e. when something was written as X and later becomes Y.  It can't detect any problem if X was written but it was really supposed to be A in the first place.
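
    A toy single-parity sketch (shell arithmetic; the values are made up) shows the difference between the two cases:

```shell
# Toy illustration: single parity is the XOR of the data blocks.
d1=5; d2=3
parity=$(( d1 ^ d2 ))              # parity is computed over whatever was written

# Bit-rot after the write (d1 silently flips 5 -> 4): the check catches it.
[ $(( 4 ^ d2 )) -eq "$parity" ] || echo "mismatch detected, block recoverable"

# Wrong data written in the first place (the torrent sent 7, we stored 5):
# parity was computed over the 5, so the check passes and cannot help.
[ $(( d1 ^ d2 )) -eq "$parity" ] && echo "parity check passes anyway"
```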

     

    Reseat your RAM sticks for sure before doing anything else.  SATA cables are super cheap so replace them all.

     

    I've also seen weird stuff when a power supply is dying, or when the PSU was marginal to begin with and the hard drives have gotten old: older drives draw a bit more power, and that can cause problems.

     

    Honestly, I've seen it all in this situation: bad motherboard, bad ethernet...

     

     

  2. 1 hour ago, tshorts said:

    You mean "Verify"? I did in the beginning, but not anymore; it takes too long. I did one now that was on "error" rather than stopped, and it gets a "corrupted segment". 

     

    Unraid's parity check has never found any errors, though. The last one was done a week ago. 

    Thanks for the response. Narrowed it down a bit. I'm lost as to what the cause can be, though..
    I don't use a cache for torrents, so it can't be that disk. Does it go through RAM when it reads a file, and might a RAM stick be bad, maybe? 

     

     

    Everything goes through RAM both ways -- nothing ever goes directly from the disk to the CPU; it always passes through RAM.

     

    Corrupted segment means something is wrong with the file as it sits on the disk.

     

    The parity check passing means the incorrect data was written to the disk consistently in the first place -- as far as the torrent data is concerned the file is wrong, but the array has no way to know that.

     

    I'm guessing a bad cable, or bad RAM.  I've seen a bad cable give similar issues.

  3. 3 minutes ago, tshorts said:

    I have an issue for the last 2 years or so:

    Seeding torrents (100% downloaded) get set to 99.9% at random and marked as Stopped or Error, and I have to restart them manually. The files' integrity doesn't seem to be affected. 

     

    I have around 500 seeding torrents from different trackers, the oldest 2 years old, the newest 1 day old. None of that seems to affect whether a torrent gets set to 99.9% or not. 

    It's been the same for more than a year (perhaps since first use, but it took me a while to notice), so it's not just a bad version. 

    It's the only torrent client I've used on Unraid, because Transmission Remote GUI is so wonderful, so I don't know if it's a torrent<->Unraid problem or a Transmission problem. 

     

    1. What is the cause of this? Is it a known issue?
    2. Is there a fix? 

    3. Is there a workaround, i.e. making Transmission automatically restart all Stopped and Errored torrents once an hour or so?

     

     

    Are you doing a forced re-check of the 99.9% torrent before restarting it?

     

    Honestly, this sounds like failing hardware to me.  To put it in perspective, I have three instances of Transmission running on my system, roughly 6,000 torrents between them, and I've never had what you describe happen, ever.
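
    As for the workaround in question 3: a minimal sketch, assuming transmission-remote is installed, the daemon answers on localhost:9091, and the RPC credentials are user:pass (all placeholders -- adjust to your setup). Starting a torrent that is already running is a no-op, so it's safe to start everything:

```
# Hypothetical crontab entry: at the top of every hour, tell the daemon
# to start all torrents; stopped and errored ones get resumed.
0 * * * * transmission-remote localhost:9091 -n user:pass -t all -s
```

    Note this only restarts them; it won't force a re-check, so you'd still want to verify any torrent that actually errored.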

     

  4. 27 minutes ago, mafkees1233 said:

    Hello,

     

    I installed the Transmission docker container on my Unraid server, but I can't connect to the GUI. I put 192.168.*.* on the whitelist but still can't connect. Can someone please point me in the right direction? Thank you.

     

    Are you trying to connect from within the Unraid web UI?  Docker, find the container, right click, Web UI.

     

    Try removing the whitelist for now to see if that helps.

     

    If it isn't working, post the URL that is displayed in the browser tab.

  5. From my end, the key is that you can connect to Transmission on the LAN but you cannot connect to it through the VPN.

     

    This means -- assuming you have the settings correct -- that the whitelist isn't the problem.

     

    Are you sure that access over the VPN appears to come from 10.8.0.3 once it's on your network?  That strikes me as slightly unusual.

     

    Are you sure that the VPN is properly forwarding accesses to the correct local IP for Transmission?

     

  6. 4 hours ago, BreakfastPurrito said:

    WHITELIST: 192.168.1.207,192.168.1.205,192.168.1.211,10.8.0.3

     

    The first 3 work, the last one does not.

     

    Well, given what you've said, my best guess is that your port forwarding isn't set up properly with the VPN.  Either that or the service that is connecting inbound is not connecting to the correct internal IP.

     

    I can almost guarantee that this isn't a Transmission problem, that it has something to do with Wireguard and/or the details of the way the VPN is set up.

  7. 5 minutes ago, BreakfastPurrito said:

    Isn't that for if you want Transmission to use a VPN? I want to tunnel into my network and just check up on my downloads.

     

    OK, it wasn't clear what you were trying to do. My best guess is that you have the syntax wrong for the whitelist, or something going on with your firewall.  Does your VPN properly route inbound traffic to other services?  That's a good way to figure out whether this is a Transmission configuration problem or a broader one.

  8. 13 minutes ago, BreakfastPurrito said:

    I have set up Transmission to use an IP whitelist, and with internal IP addresses (192.168.x.x etc.) it works fine.  However, when I connect over the VPN it won't let me access it.  Wireguard gives me a 10.0.x.x address, and if I add that to the whitelist it still blocks me. Is this a known issue or am I doing something wrong?

     

    Have you looked into the transmission version that is specifically modified to support VPN?

     

    It has good documentation and should be easier to configure.

     

    WHITELIST is the list of IP addresses that can access the Web UI.  You want to set this to your LAN, plus possibly 127.0.0.1.

     

    PEERPORT is the port advertised to the external peers attempting to connect to your Transmission.  It's there to allow for funky setups where you might want to advertise port X on your exterior interface and map it internally to a different port when port forwarding; it's also useful with some VPN setups.  If you have no idea what to set this to, set it to the port you are using for inbound Transmission traffic.

     

    I did not need HOST_WHITELIST, which is similar to WHITELIST above but uses host names, so I clicked REMOVE next to that variable.

     

    Leaving them blank -- here or in settings.json -- will not work.
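
    For reference, these variables end up in Transmission's settings.json; a sketch of the relevant keys (the addresses and port are placeholders from my setup -- adjust to yours):

```
{
  "rpc-whitelist": "192.168.1.*,127.0.0.1",
  "rpc-whitelist-enabled": true,
  "peer-port": 52331
}
```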

     

  10. For the curious who might have read my last two posts and wondered if there are any updates...

     

    I ended up changing two of the three docker containers from bridge mode to fixed static IPs on my network, letting them run on the native docker ports but with unique PEERPORT settings.  I adjusted my port forwarding, and the containers now work correctly, allowing inbound client traffic.

     

    My best guess is that the copied containers got confused when they were created and, in bridge mode, no longer understood what base port the image wants to listen on.  The result is that inbound connections to the copied containers fail.

     

    It's possible that if I had blown away all the Transmission containers (taking notes on the configuration details), reinstalled Transmission, cloned the default configuration, and then adjusted each clone separately, I could have gotten things working -- but frankly I wanted a low-effort solution.

     

    I am not all that conversant with docker, but that's my best guess: I installed the first container, adjusted it until I had it the way I wanted, then made two copies, and could never get the copies to work right until I switched them to static IPs.

     

    Hope this helps someone.

     

  11. Posting with one other strange thing I noticed.

     

    On the console of the one that works:

     

    root@fede8952ebd5:/# netstat -ntlp | grep LISTEN
    tcp        0      0 0.0.0.0:9091            0.0.0.0:*               LISTEN      -
    tcp        0      0 0.0.0.0:52331           0.0.0.0:*               LISTEN      -
    tcp        0      0 :::52331                :::*                    LISTEN      -
     

    The one that is broken, same command:

     

    root@6d8cbcaaa6aa:/# netstat -ntlp | grep LISTEN
    tcp        0      0 0.0.0.0:9091            0.0.0.0:*               LISTEN      -
    tcp        0      0 0.0.0.0:52329           0.0.0.0:*               LISTEN      -
    tcp        0      0 :::52329                :::*                    LISTEN      -
     

    This seems truly odd.  Within the container itself, it should still be listening on the same port, 52331, right?  The bridge should take care of mapping the external 52329 to the internal 52331.  (Just like this second instance has a web UI available at 9092, which works: the bridge maps the external request on 9092 to the internal 9091, which is what it's listening on.)

     

    Again, I freely and openly admit that I've never tried running multiple instances of the same docker image, but my expectation would have been for the two instances to listen on the same internal ports, with the bridge taking care of the mappings.
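
    To spell out that expectation: with -p HOST:CONTAINER, inbound traffic on the host port is delivered to the container-side port, so the daemon inside must listen on the container-side number.  A quick sketch to check (the container name is the one from my run commands below; the helper function is just for illustration):

```shell
# container_port: pull the container-side port out of a -p HOST:CONTAINER spec.
container_port() { printf '%s\n' "${1#*:}"; }

container_port 52329:52331    # prints 52331 -- the daemon inside must
                              # listen on 52331, not on 52329

# If docker is available, compare against the live mapping: for
# -p 52329:52331, "docker port" reports 52331/tcp -> 0.0.0.0:52329.
if command -v docker >/dev/null 2>&1; then
  docker port transmission_Dime 2>/dev/null || true
fi
```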

     

  12. I'm trying to run multiple instances of this docker on Unraid.

     

    In general I'm having reasonable success but the initial instance is the only one working properly.

     

    The issue is that the second and third instances are not accessible from the outside; after double- and triple-checking the port forwarding, it seems to be a problem with the docker setup itself somehow.

     

    Here's what I mean.  On a console, I see this output:

     

    netstat -ntlp | grep LISTEN

    (snip)

    tcp        0      0 0.0.0.0:52333           0.0.0.0:*               LISTEN      9025/docker-proxy   
    tcp        0      0 0.0.0.0:52329           0.0.0.0:*               LISTEN      3551/docker-proxy   
    tcp        0      0 0.0.0.0:52331           0.0.0.0:*               LISTEN      6311/docker-proxy   

    (snip)
    tcp        0      0 0.0.0.0:9093            0.0.0.0:*               LISTEN      9076/docker-proxy   
    tcp        0      0 0.0.0.0:9092            0.0.0.0:*               LISTEN      3598/docker-proxy   
    tcp        0      0 0.0.0.0:9091            0.0.0.0:*               LISTEN      6348/docker-proxy   
    (snip -- including IPv6 entries for these)

     

    The three containers use 9091, 9092, and 9093 for the web UI and RPC, and 52331, 52329, and 52333 for BT traffic.

     

    On my LAN, I can access the web UI for each of the three -- and make transmission-remote calls using 9091, 9092 and 9093.

     

    The first one accepts client traffic from outside the LAN just fine, i.e. its settings indicate the port is open, external testing tools confirm the port is open, and when I visit tracker web sites they show that I am reachable.

     

    The other two are not accessible from outside the LAN i.e. inbound traffic doesn't work.

     

    Now, here's the kicker: I can see above that the containers should be getting the BT traffic, i.e. on the Unraid system those ports show as open, so on a whim I plugged

     

    http://10.50.0.250:52331

     

    into a browser on my LAN, and I get a response -- a series of messages saying it's not a valid request.  That's the instance that is accessible from the outside, so that's the expected behavior.

     

    If I plug in one of the ones that doesn't work

     

    http://10.50.0.250:52329

     

    I get a "refused to connect" error -- so the container seems to be refusing the connection, or somehow the port isn't being properly forwarded.

     

    I am at a loss.  This test seems to indicate that whatever is going on is local i.e. the port isn't being properly forwarded from docker.

     

    Here's the stuff from starting the one that works:

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='transmission' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="unraid" -e HOST_CONTAINERNAME="transmission" -e 'TRANSMISSION_WEB_HOME'='/flood-for-transmission/' -e 'PEERPORT'='52331' -e 'WHITELIST'='10.55.0.*,127.0.0.1' -e 'PUID'='99' -e 'PGID'='100' -e 'UMASK'='022' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:9091]' -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/transmission-logo.png' -p '9091:9091/tcp' -p '52331:52331/tcp' -p '52331:52331/udp' -v '/mnt/user/tunes_work/BTDownloads':'/downloads':'rw' -v '/mnt/user/tunes_work/BTWatch/':'/watch':'rw' -v '/mnt/user/easystore':'/easystore':'ro' -v '/mnt/user/home':'/mnt/home':'ro' -v '/mnt/user/appdata/transmission':'/config':'rw' 'lscr.io/linuxserver/transmission'

     

    and it shows these mappings for the bridge

     

    172.17.0.3:52331/TCP -> 10.55.0.250:52331
    172.17.0.3:52331/UDP -> 10.55.0.250:52331
    172.17.0.3:9091/TCP -> 10.55.0.250:9091

     

    Here's the stuff from starting one of the ones that doesn't work:

     

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='transmission_Dime' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="unraid" -e HOST_CONTAINERNAME="transmission_Dime" -e 'TRANSMISSION_WEB_HOME'='/flood-for-transmission/' -e 'PEERPORT'='52329' -e 'WHITELIST'='10.55.0.*,127.0.0.1' -e 'PUID'='99' -e 'PGID'='100' -e 'UMASK'='022' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:9091]' -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/transmission-logo.png' -p '9092:9091/tcp' -p '52329:52331/tcp' -p '52329:52331/udp' -v '/mnt/user/tunes_work/BTDownloads':'/downloads':'rw' -v '/mnt/user/tunes_work/BTWatch/':'/watch':'rw' -v '/mnt/user/easystore':'/easystore':'ro' -v '/mnt/user/home':'/mnt/home':'ro' -v '/mnt/user/appdata/transmission_Dime':'/config':'rw' 'lscr.io/linuxserver/transmission'
     

    and these mappings

     

    172.17.0.4:52331/TCP -> 10.55.0.250:52329
    172.17.0.4:52331/UDP -> 10.55.0.250:52329
    172.17.0.4:9091/TCP -> 10.55.0.250:9092

     

    I am running today's version of this docker.  I updated everything when I started having problems.

     

    Any thoughts?  I'm at a loss.  Admittedly, I've never tried to do this before -- run multiple copies of the same docker image.

     

     

     

     

     

    Random other data...  the motherboard, CPU, memory, and network card were all used for years with CentOS 8 as a NAS.

     

    I suppose there's always the possibility of hardware failure, but I didn't even take them out of the case...  I removed the old drives, put in some new drives for Unraid, and went from there.  I mention this because I did read some threads suggesting memtest or other hardware diagnostics; I haven't done that, but I'd be pretty surprised if this were a hardware issue.

  14. Hi, I'm in the evaluation phase of Unraid and what I see so far is positive but I'm having some really troubling issues right now.

     

    From Windows 10, when I try to bring over moderate amounts of data by copying folders from my Unraid server to a local flash drive, it happens with great frequency that the copy drops to 0 B/s, then the SMB shares are dropped by Windows and the WebGUI stops responding -- it almost seems like the network stops working.

     

    I have had other systems on this network for years for NAS and had no issues -- this seems to be specific to Unraid.

     

    When I log in manually on the console as root, I can reboot the system just fine, but then it starts a parity check (which typically takes 16 hours to complete).

     

    I have rebooted the Windows 10 side of things and that didn't help.  I can crash Unraid quickly and easily just by going to one of the (now working) SMB shares and trying to copy 100 GB or so; Unraid will go back into that same state.

     

    I'm really baffled.  Any help most appreciated.  Never seen this with a NAS on my LAN before.

     

    Diagnostics attached.

     

    unraid-diagnostics-20220206-1621.zip

  15. I am extremely new.

     

    I had a similar issue where a cache drive filled to 100% and all my shares disappeared.

     

    The advice I got here was to reboot and I did so and they came back.

     

    YMMV -- I am still in the trial period, don't know anything about Unraid.

     

    If you search for my posts, you'll see the thread in question, and the response from the mods.

     

    I too was terrified that I had lost all my data.

     

  16. Hi there, hoping someone can help with their opinion as to the best way to proceed.

     

    I have a lot of music that is in odd formats like SHN.  I often have to decode to WAV and encode to FLAC so I can have files that I can tag and use in my music players.

     

    In the past my file server was running CentOS and I would just use a user level command line (bash shell) and some scripts plus some installed utilities.

     

    That file server will be replaced by Unraid and I need to have similar functionality.

     

    I see three possibilities and I wonder which is best, or if I'm missing something.

     

    1) Do these tasks directly from the Unraid shell.  I'm not sure how to install those scripts and utilities, and I don't really want to do it as root, so ideally someone would tell me how to install that stuff, have it persist, and how to get a user shell from my Windows box via the GUI.

     

    2) Use Docker with something like the Rocky Linux image.  I actually grabbed this and played with it a tiny bit, but it just runs and exits.  From reading, I realize I need to tell it somehow that I want it to install some scripts and give me a user shell, but I'm not sure how.

     

    3) Create a Rocky Linux VM that I can spin up when I need it.  I did CentOS sysadmin tasks for years, so this would be comfortable for me, I think, but again I'm not entirely sure of the steps to get that working on my box.

     

    Can anyone provide instructions, directions, or documentation for getting a safe, customized, user-level shell running on Unraid via one of these methods, or some other method?  (Add functionality to Unraid itself, Docker, a VM, something else?  Maybe a plugin, but I didn't see any relevant ones.)

     

    Thanks in advance -- this is one of the bigger things I have to solve before I can shut down my old box and pay for Unraid.
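
    For concreteness, the kind of script I mean is tiny -- here's a sketch of the SHN-to-FLAC step, assuming ffmpeg (which can decode Shorten directly, so no intermediate WAV file is needed); the flac_name helper is just illustrative:

```shell
#!/bin/sh
# Convert every .shn file in the current directory to .flac.
flac_name() { printf '%s\n' "${1%.shn}.flac"; }

for f in *.shn; do
  [ -e "$f" ] || continue             # no .shn files: skip the unexpanded glob
  ffmpeg -i "$f" "$(flac_name "$f")"  # decode Shorten, encode FLAC
done
```

    The open question is just where a script like that can live, persist across reboots, and run as a non-root user.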

     

  17. 1 hour ago, JonathanM said:

    You also need to set minimum free space for the shares and pools to a reasonable figure so Unraid can appropriately allocate between disks and pools and not run completely out of space on any single volume. Such is the downside of individual filesystems that allow differential spindowns and isolated drive recovery as opposed to traditional RAID.

     

    Could you tell me where to find these settings and what typical values might be?

     

    I follow what you are saying and I understand the overall concepts.  I did restart the system -- having shut it down in frustration -- and after starting the array again, I see the user shares, and there was no data loss, though this has been terrifying.

     

    Big picture, as someone with a lot of experience, it feels to me like the Quick Start materials are possibly oversimplified, or maybe some defaults aren't appropriate -- e.g. a minimum free space of zero seems like a bad default, and yet it seems that's what it was.

     

    Also, I suspect my use case -- getting my Unraid box up and immediately putting a few TB of data on it -- is probably pretty common, and if it had been suggested to start with cache: no, I would have done that and probably avoided all this mishegoss.

     

    Thank you for the help, and in advance for making a stab at a reasonable minimum-free value.  5% or 10% of the size of the pool, or maybe something like 5x the biggest normal file?  What's reasonable?

    Hi, this is really odd, but it makes me very paranoid about going forward with Unraid.

     

    I started setting up unraid 6.9.2 today.

     

    I have a pair of SSD that are being used as a cache pool, and two large hard drives.

     

    The system was configured with a few default user shares, and I left them alone.

     

    I had no trouble starting the parity check -- it was probably around 12% when things got weird -- and no problems formatting the one non-parity hard drive and getting the cache pool up and running.

     

    I started transferring data from my existing NAS via rsync.  I had set that share to cache = prefer.  After some time the cache drives filled, and I realized this was probably not the best setting for the initial data transfer.  I changed it to cache = no and continued copying.  The rsync resumed where it had stopped -- kind of neat -- as Unraid presented the share with the data merged together, as expected.

     

    Anyhow I started moving some other data to another user share, and with the first share, I mounted it successfully from other systems and was playing some music from that share.

     

    After thinking about it a little, I realized that with the cache full this was not optimal: the data really should be flushed to the disks, freeing the cache to be reused properly.  (I'm not sure if this was necessary or not, but that was my thinking, as the cache had 128 GB used out of 128 GB.)  So in the settings I found the way to run the mover manually, and I clicked that button.

     

    Nothing happened, which seemed weird.  The cache usage didn't change.  I then thought about it for a second, and my initial guess was that because I had changed from cache = prefer to cache = no, the mover didn't know what to do.  So I changed it back to cache = prefer and ran the mover again.

     

    At that moment, all the shares disappeared, along with all my data.  In terminal, I get this:

     

    root@unraid:~# ls /mnt/user
    /bin/ls: cannot access '/mnt/user': Transport endpoint is not connected
     

    and in the GUI there are no user shares at all -- not even the pre-made ones.

     

    I am at a loss.  Thoughts?  From my end, I am just floored that running the mover a couple of times after changing the cache settings could result in all of the user shares losing all their data.

     

    I would not believe it if I didn't see it myself...
