Posts posted by Cessquill
-
-
11 minutes ago, ProphetSe7en said:
Thanks. That covers one part of what I want. Still need to figure out how to stop/pause all torrents and then restart them after the mover is done
Without delving into whether it's possible to interact with rtorrent at that level, can't you just stop and start the docker with "docker stop <name-of-container>" and "docker start <name-of-container>" commands?
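If that approach works for you, the whole thing could be wrapped in a small script (e.g. run via the User Scripts plugin). This is an untested sketch - "rtorrent" is a placeholder container name, and by default it only prints the commands so you can check them; set DRY_RUN=0 on your server to actually run them.

```shell
#!/bin/bash
# Untested sketch: stop a torrent container, run the mover, start it again.
# "rtorrent" is a placeholder - substitute your container's actual name.
# DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0
# to really execute them.
CONTAINER="${CONTAINER:-rtorrent}"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$*"            # preview mode: print the command
    else
        "$@"                 # live mode: execute it
    fi
}

run docker stop "$CONTAINER"
run /usr/local/sbin/mover    # stock Unraid mover path
run docker start "$CONTAINER"
```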
-
33 minutes ago, Marzel said:
OK, then I need to find out how the "Custom Nginx Configuration" in Nginx Proxy Manager works. Haven't had to use it yet.
For this, I have...
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 16384m;
client_max_body_size 0;
...in that field and it's working fine. Can't remember much more about it though since I set it up once and haven't touched it since.
-
On 3/14/2021 at 3:22 AM, AgentXXL said:
To resolve the DNS rebinding issue I went into my firewall config (pfSense) and under DNS Resolver I added the unraid.net domain to the 'Domain Overrides' section. One thing I'm not sure about is where pfSense asks me to provide the DNS 'Lookup Server IP Address' so I just set it to a Cloudflare one for now, as shown on the attached pic. Cloudflare resolves unraid.net so I suspect I'm correct.
@AgentXXL - from a Spaceinvaderone video, in pfSense go to Services, DNS Resolver and in the custom options at the bottom enter
server:
private-domain: "unraid.net"
(think that relates to your issue here)
EDIT: Ignore me, I see that you've been given the same advice already.
-
1 minute ago, Squid said:
Was talking about the zooming, not about how it looks
Sorry, long day. As you were
-
3 hours ago, Squid said:
Also affects Chrome. I'd suggest using PuTTY, where you can set the terminal font.
Oddly, it looks fine on my Chrome.
-
Updated original post to reflect new structure of SeaChest Utilities zip file
-
Just now, RockDawg said:
I assume I still have to unassign, reassign and rebuild the drive? And these changes will merely keep it from going off line again?
Yes
-
Just now, RockDawg said:
I tried the ubuntu files just for the info command and it worked. So I am going to try continuing with those.
I'd have thought centos, but not being a Linux guy I'm not sure (or how much difference it makes). I'll update the post when it's clear.
-
13 minutes ago, RockDawg said:
I am stuck on one thing. When I unzip the SeaChestUtilities.zip file and go to /Linux/Lin64/, there are 3 folders in there, no files. The folders are centos-7_aarch64, centos-7_x86_64 and ubuntu-20.04_x86_64. Which do I want?
That's changed since I did it last week. I'm just starting to test, but @TDD or @JorgeB may be more help here
-
41 minutes ago, TDD said:
Thank you for the work bringing this together. There is an easy way to just target the disks you want to modify.
SeaChest_PowerControl_1100_11923_64 -s --onlySeagate
I believe most tools actually allow this -s switch. See screenshot. This allows you to skip the 'map' part and make this easier :-)!
Kev.
Thanks for that - I did see onlySeagate when trawling through the text doc manuals; forgot to go back to it (before I'd got SC working).
-
Just now, JorgeB said:
Nice work, if you don't object I was thinking of moving this to the general guides section or it will likely drop from the 1st page here.
Of course, yes. Partly seeking confirmation I hadn't done something seriously wrong - little bit out of my depth!
-
NOTE: There's a TL;DR section at the end of this post with required steps
People with specific Seagate Ironwolf disks on LSI controllers have been having issues with Unraid 6.9.0 and 6.9.1. Typically, a drive could drop off the system when spinning up. Getting it back would require checking, unassigning, reassigning and rebuilding its contents (about 24 hours). It happened to me three times in a week across two of my four affected drives.
The drive in question is the 8TB Ironwolf ST8000VN004, although the 10TB model has also been mentioned, so it may affect several.
There have been various comments and suggestions over the threads, and it appears that there is a workaround solution. The workaround is reversible, so if an official fix comes along you can revert your settings back. This thread is here to consolidate the great advice given by @TDD, @SimonF, @JorgeB and others to hopefully make it easier for people to follow.
This thread is also here to hopefully provide a central place for those with the same hardware combo to track developments.
NOTE: Carry out these steps at your own risk. Whilst I will list each step I did and it's all possible within Unraid, it's your data. Read through, and only carry anything out if you feel comfortable. I'm far from an expert - I'm just consolidating valuable information that was scattered about - so if this is doing more harm than good, or is repeated elsewhere, then close this off.
The solution involves making changes to the settings of the Ironwolf disk. This is done by running some Seagate command line utilities (SeaChest) explained by @TDD here
The changes we will be making are
- Disable EPC
- Disable Low Current Spinup (not confirmed if this is required)
The Seagate utilities refer to disks slightly differently than Unraid, but there is a way to translate one to the other, explained by @SimonF here
I have carried out these steps and it looks to have solved the issue for me. I've therefore listed them below in case it helps anybody. It is nowhere near as long-winded as it looks - I've just listed literally every step.
Note that I am not really a Linux person, so getting the Seagate utilities onto Unraid might look like a right kludge. If there's a better way, let me know. All work is carried out on a Windows machine. I use Notepad to prepare commands beforehand: I construct each command first, then copy and paste it into the terminal.
If you have the option, make these changes before upgrading Unraid...
Part 1: Identify the disk(s) you need to work on
EDIT: See the end of this part for an alternate method of identifying the disks
1. Go down your drives list on the Unraid main tab. Note down the part in brackets next to any relevant disk (eg, sdg, sdaa, sdac, sdad)
2. Open up a Terminal window from the header bar in Unraid
3. Type the following command and press enter. This will give you a list of all drives with their sg and sd references...
sg_map
4. Note down the sg reference of each drive you identified in step 1 (eg, sdg=sg6, sdaa=sg26, etc.)
There is a second way to get the disk references which you may prefer. It uses SeaChest, so needs carrying out after Part 2 (below). @TDD explains it in this post here...
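If you have a lot of drives, the sg_map output from step 3 can also be filtered down to just the ones you noted in step 1. A rough sketch - it assumes sg_map's usual two-column "/dev/sgN  /dev/sdX" output, and the sd names shown are examples:

```shell
#!/bin/bash
# Rough sketch: keep only the sg_map rows whose sd name matches one of
# the drives noted in step 1. Assumes two-column "/dev/sgN  /dev/sdX" output.
filter_sg_map() {
    local output="$1"; shift           # $1 = captured sg_map output, rest = sd names
    local pattern
    pattern=$(printf '/dev/%s$|' "$@") # build e.g. "/dev/sdg$|/dev/sdaa$|"
    echo "$output" | grep -E "${pattern%|}"
}

# On the server you would call it like:
#   filter_sg_map "$(sg_map)" sdg sdaa
```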
Part 2: Get SeaChest onto Unraid
NOTE: I copied SeaChest onto my flash drive, and then into the tmp folder. There's probably a better way of doing this.
EDIT: Since writing this, the zip file to download has changed its structure; I've updated the instructions to match the new download.
5. Open your flash drive from Windows (eg \\tower\flash), create a folder called "seachest" and enter it
6. Go to https://www.seagate.com/gb/en/support/software/seachest/ and download "SeaChest Utilities"
7. Open the downloaded zip file and navigate to Linux\Lin64\ubuntu-20.04_x86_64\ (when this guide was written, it was just "Linux\Lin64". The naming of the ubuntu folder may change in future downloads)
8. Copy all files from there to the seachest folder on your flash drive
Now we need to move the seachest folder to /tmp. I used mc, but many will just copy it over with a command. The rest of this part takes place in the Terminal window opened in step 2...
9. Open Midnight Commander by typing "mc"
10. Using arrows and enter, click the ".." entry on the left side
11. Using arrows and enter, click the "/boot" folder
12. Tab to switch to the right panel, use arrows and enter to click the ".."
13. Using arrows and enter, click the "/tmp" folder
14. Tab back to the left panel and press F6 and enter to move the seachest folder into tmp
15. Press F10 to exit Midnight Commander.
Finally, we need to change to the seachest folder in /tmp and make these utilities executable...
16. Enter the following commands...
cd /tmp/seachest
...to change to your new seachest folder, and...
chmod +x SeaChest_*
...to make the files executable.
Part 3: Making the changes to your Seagate drive(s)
EDIT: When this guide was written, there was what looked like a version number at the end of each file name, represented by XXXX below. Now each file ends with "_x86_64-linux-gnu", so wherever the commands mention XXXX, replace it with that suffix.
This is all done in the Terminal window. The commands here have two things that may be different on your setup - the version of SeaChest downloaded (XXXX) and the drive you're working on (YY). This is where Notepad comes in handy - plan out all the required commands first.
17. Get the info about a drive...
./SeaChest_Info_XXXX -d /dev/sgYY -i
...in my case (as an example) "./SeaChest_Info_150_11923_64 -d /dev/sg6 -i"
You should notice that EPC has "enabled" next to it and Low Current Spinup is enabled
18. Disable EPC...
./SeaChest_PowerControl_XXXX -d /dev/sgYY --EPCfeature disable
...for example "./SeaChest_PowerControl_1100_11923_64 -d /dev/sg6 --EPCfeature disable"
19. Repeat step 17 to confirm EPC is now disabled
20. Repeat steps 17-19 for any other disks you need to set.
21. Disable Low Current Spinup...
./SeaChest_Configure_XXXX -d /dev/sgYY --lowCurrentSpinup disable
...for example "./SeaChest_Configure_1170_11923_64 -d /dev/sg6 --lowCurrentSpinup disable"
It is not possible to check this without rebooting, but if you do not get any errors it's likely to be fine.
22. Repeat step 21 for any other disks.
You should now be good to go. Once this was done (it took about 15 minutes), I rebooted and then upgraded from 6.8.3 to 6.9.1. It's been fine since, whereas before I would get a drive drop-off every few days. Make sure you have a full backup of 6.8.3, and don't make too many system changes for a while in case you need to roll back.
SeaChest will be removed when you reboot the system (as it's in /tmp). If you want to retain it on your boot drive, copy it to /tmp instead of moving it. You will then need to copy it off /boot each time you want to run it, as it needs to be made executable again.
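If you do keep a copy on the flash drive, re-staging it after a reboot could itself be scripted (e.g. as a User Scripts job set to run at array start). A sketch, assuming the folder layout from Part 2:

```shell
#!/bin/bash
# Sketch: copy SeaChest from the flash drive back into /tmp and make the
# tools executable again. Paths assume the layout from Part 2 of this post.
stage_seachest() {
    local src="$1" dst="$2"
    [ -d "$src" ] || return 0          # nothing to do if the source is missing
    mkdir -p "$dst"
    cp "$src"/SeaChest_* "$dst"/
    chmod +x "$dst"/SeaChest_*
}

stage_seachest /boot/seachest /tmp/seachest
```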
Completely fine if you want to hold off for an official fix. I'm not so sure it will be a software fix though, since it affects these specific drives only. It may be a firmware update for the drive, which may just make similar changes to above.
As an afterthought, looking through these Seagate utilities, it might be possible to write a user script to completely automate this. Another alternative is to boot into a Linux USB stick and run it outside of Unraid (though it would be more difficult to identify the drives).
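As a rough idea of what such a script might start from, this prints the three SeaChest commands for each drive so you can review them before running anything. The suffix and sg numbers below are examples only - use the values from Parts 1 and 3:

```shell
#!/bin/bash
# Rough automation sketch: print the three SeaChest commands for each
# drive. The suffix and sg numbers are examples - substitute your own.
plan_commands() {
    local suffix="$1"; shift
    local dev
    for dev in "$@"; do
        echo "./SeaChest_Info_${suffix} -d /dev/${dev} -i"
        echo "./SeaChest_PowerControl_${suffix} -d /dev/${dev} --EPCfeature disable"
        echo "./SeaChest_Configure_${suffix} -d /dev/${dev} --lowCurrentSpinup disable"
    done
}

plan_commands x86_64-linux-gnu sg6 sg26
```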
***********************************************
TL;DR - Just the Steps
I've had to do this several times myself and wanted somewhere to just get all the commands I'll need...
Get all /dev/sgYY numbers from list (compared to dashboard disk assignments)...
sg_map
Download seachest from https://www.seagate.com/gb/en/support/software/seachest/
Extract and copy seachest folder to /tmp
Change to seachest and make files executable...
cd /tmp/seachest
chmod +x SeaChest_*
For each drive you need to change (XXXX is suffix in seachest files, YY is number obtained from above)...
./SeaChest_Info_XXXX -d /dev/sgYY -i
./SeaChest_PowerControl_XXXX -d /dev/sgYY --EPCfeature disable
./SeaChest_Configure_XXXX -d /dev/sgYY --lowCurrentSpinup disable
Repeat the first info command at the end to confirm EPC is disabled. Cold boot to make sure all is sorted.
-
8 hours ago, TDD said:
Try the EPC disable/low current disable per my posts. They are reversible if nothing better happens after reboot.
I've had no issue since.
Kev.
To second this, I tried it last night and it seems to be going well. It was easier than I was expecting.
I'm just creating a General Support post collating the entries spread across the 6.9.0 & 6.9.1 topics and including the resulting step-by-steps that I took if you don't mind.
-
7 minutes ago, TDD said:
See my earlier post on how I fixed this. I have had no issues since.
Do you mean where you ran SeaChest to change drive settings? I've got four ST8000VN004 in my array via LSI, and two of them have dropped off, one of them twice (I think during spinup).
It would be difficult to run them off the motherboard, so I'm just waiting for a rebuild to finish before potentially downgrading.
-
51 minutes ago, JorgeB said:
AFAIK there have only been issues with the ST8000VN004, and only when used with an LSI HBA, not clear so far if it's a general issue with that combo or it only affects some users.
Is there a general topic for this to maybe pool resources? Spent a couple of hours earlier going through my diagnostics and Googling, and it seems to be a fairly common issue with NAS systems and this controller/drive combo (the 10TB drive was also mentioned).
-
2 minutes ago, JorgeB said:
Yes, the LSI SAS/SATA controller, problem wouldn't exist if you could connect them to the onboard Intel SATA ports.
Thanks. Might be able to shuffle internal drives into the hotswap bays and just about do it - I'll have a think.
-
19 minutes ago, JorgeB said:
For now I'm only interested in the ST8000VN004 when connected to an LSI.
Just so I can learn something, is LSI referring to the SAS ports on my motherboard? I have enough SATA ports for the 4 drives in question, but not enough space or molex/SATA power splitters.
And when a drive goes into error state because of this, is rebuilding its contents the only way to get it back into the array?
-
21 minutes ago, Maticks said:
When you roll back, it appears the cache drive pulls itself from the pool; it's in unassigned and you will need to add it back in.
Just in case you freak out like me.
Thanks. I've got a flash backup here that I should be good to copy over and reboot.
The only thing I'm wary about is that I've changed my USB stick since upgrading - not sure whether it'll still show up as licensed on a previous install.
-
1 hour ago, JorgeB said:
If you have an LSI and 8TB Ironwolf drive(s) (only model ST8000VN004) probably best to stick with v6.8 for now, there have been multiple users with issues that can result in disable disks after upgrading to v6.9, on the other hand if anyone using them upgraded without issues please post back.
As per my support thread I've got 4 of the ST8000VN004 drives and three random drop-outs. I'm going to roll back to 6.8.3 for now though as I could do with a few days without unprotected array anxiety!
-
Ahh, thank you. I had seen that in the upgrade thread and in the back of my mind I wondered whether it applied to me. Don't have any other way of connecting them at the moment, so I'll look to roll back once this rebuild finishes.
Thank you
-
Hi - this has now happened three times, but earlier I wasn't sure whether it was because my flash drive was full. Drives 1 and 24 went into error state separately, and after a flash replacement, disk 24 has just done it again. Disks 1 & 24 are the newest drives in my array, and running a check on them has returned no errors. Unassigning, starting, stopping, reassigning and rebuilding returned things to normal for a day or so.
All 24 data drives are connected to a Supermicro backplane via a reverse breakout SAS cable from 4 SAS ports on the motherboard. I didn't have a problem on 6.8.3, but I'm not pointing fingers as I have replaced fans recently (case rebuild).
Got a diagnostics file this time. I'm fairly sure the drives are OK, but any idea what keeps happening?
-
Thanks @Squid and @jademonkee - Sandisk on order (only because it was quicker delivery).
Mine used to be on the front, and I'm surprised it's lasted this long and not snapped off. Yay for internal USB!
-
Only minor issue here is that it looks like my 12-year-old 1GB flash drive is now too full. It wouldn't upgrade until I cleared some space (after backing up); now Fix Common Problems tells me it's up to 90%.
Sounds trivial, but is there still recommended hardware for Unraid OS flash drives? Or will anything do?
-
7 hours ago, drugdoctor said:
Sorry,
Where is this pinned post?
At the top of the page, in the Deluge support thread.
[Plugin] CA User Scripts
in Plugin Support
Posted
docker pause <name-of-container> / docker unpause <name-of-container> then?