Posts posted by bobokun
-
20 minutes ago, trurl said:
Do you really need its capacity right now? If not, I would just leave it out and go for a newer, bigger one later on, as bjp999 says.
Adding a disk just because you have it isn't really a good idea. More disks means more opportunities for problems. If you don't need the space, don't add a disk.
In that case, should I just leave it as an unassigned disk outside the array and use it to back up important appdata and documents? Would I be able to create a user share on the unassigned disk, or is the only way to access the disk directly through /mnt/disk/?
-
10 minutes ago, bjp999 said:
Since it is not in the array yet - do a preclear on it. Let's see what the attributes show afterwards.
It's passed two cycles of preclear (results are in the first post).
That's why I was wondering if it was safe to add to my array. I think it should be fine now that the SMART status has changed, with no errors.
-
It isn't a SMART report for a different disk. If you compare the values from that post with the ones from the first post, all the values are the same except the offline uncorrectable count. The notification I got was that the offline uncorrectable value changed in the SMART status. Not sure what that "undefined undefined" overlay is, but it's a notification I just noticed and cleared... maybe because I was browsing on mobile when I took the screenshot? Not sure.
I haven't added this drive to the array yet, so please correct me if I'm wrong: I should add the device to the array first, then let it do a parity check 3 times without corrections to parity. Is the only way to do that clicking the check button 3 times (while leaving "write corrections to parity" unchecked), or can I run it three times consecutively? Each parity check takes quite a while, so it might take a couple of days to complete.
-
15 minutes ago, bjp999 said:
"Fails" is a strong term. Drives rarely fail. They are not like light bulbs and other electronics, which will just refuse to turn on when they break. Drives tend to (but not always) show some SMART attribute anomalies. If left untreated, they tend to do nastier things like cause read errors, and/or occasionally start reporting invalid data. It is VERY rare that a drive actually "breaks" to the point it won't power up or be recognized by a controller.
As an aside - in some ways unRAID would work much better if drives DID fail like light bulbs. Poof - drive is dead. Replace / rebuild it. It is the act and diagnosis of failing that causes so many problems!
If you are attentive (and I think plugins like Fix Common Problems help with this), you see the SMART problems and act on them, preemptively addressing problems before the drive starts spewing garbage, which will mess up parity and make even single-disk recovery imperfect. When I get a problem like yours, I will run a few non-correcting parity checks. If I can get through 3 of them in a row and the SMART problems get no worse, I'll trust the drive. But if I run 4-5 and they keep coming, or worse, I start getting parity errors, it is time to replace the drive.
Something interesting happened which I did not know could happen. I got a notification for that same drive today which showed the error disappeared in the SMART report.
-
10 hours ago, bjp999 said:
Did this just happen or has it been that way for a long time? It is often the movement in values, rather than the values themselves, that gives the most information.
Part of me says watch it and see if it gets worse.
The other part of me says that a 5 year old 2T is too old and too small to be worried about. Buy a couple bigger disks and clean out the small disks. Use the small disks as backups.
I'm not sure how long it has been that way. I recently took the 2TB out of my main PC to put it in my unRAID server, and that's when I noticed the SMART report. I guess I'll leave it as part of my array, and if it fails I can always buy another HDD to replace it. All the other HDDs are newly bought 4TB drives which have passed preclear, so I don't think they're prone to fail anytime soon.
-
I'm just wondering: is this drive safe to add to my array? It passed two cycles of preclear, but it has a SMART warning for offline uncorrectable.
The drive in question is an old (4-5 year) WD Green 2TB drive.
############################################################################################################################
#                                                                                                                          #
#                                 unRAID Server Preclear of disk WD-WMAZ20281452                                           #
#                                 Cycle 2 of 2, partition start on sector 64.                                              #
#                                                                                                                          #
#   Step 1 of 5 - Pre-read verification:                  [7:12:58 @ 0 MB/s] SUCCESS                                       #
#   Step 2 of 5 - Zeroing the disk:                       [6:19:23 @ 87 MB/s] SUCCESS                                      #
#   Step 3 of 5 - Writing unRAID's Preclear signature:                        SUCCESS                                      #
#   Step 4 of 5 - Verifying unRAID's Preclear signature:                      SUCCESS                                      #
#   Step 5 of 5 - Post-Read verification:                 [6:15:04 @ 88 MB/s] SUCCESS                                      #
#                                                                                                                          #
############################################################################################################################
#   Cycle elapsed time: 19:49:05 | Total elapsed time: 39:35:45                                                            #
############################################################################################################################

############################################################################################################################
#                                                                                                                          #
#   S.M.A.R.T. Status default                                                                                              #
#                                                                                                                          #
#   ATTRIBUTE                      INITIAL   CYCLE 2   STATUS                                                              #
#   5-Reallocated_Sector_Ct        0         0         -                                                                   #
#   9-Power_On_Hours               27335     27374     Up 39                                                               #
#   194-Temperature_Celsius        30        34        Up 4                                                                #
#   196-Reallocated_Event_Count    0         0         -                                                                   #
#   197-Current_Pending_Sector     1         0         Down 1                                                              #
#   198-Offline_Uncorrectable      1         1         -                                                                   #
#   199-UDMA_CRC_Error_Count       0         0         -                                                                   #
#                                                                                                                          #
############################################################################################################################
#   SMART overall-health self-assessment test result: PASSED                                                               #
############################################################################################################################
--> ATTENTION: Please take a look into the SMART report above for drive health issues.
--> RESULT: Preclear Finished Successfully!
-
Do you use uBlock? I had a similar issue, and once I disabled uBlock it loaded fine.
-
6 hours ago, bonienl said:
Did you start one or more preclear actions?
yes I was in the process of preclearing two drives
-
1 minute ago, Squid said:
That's the problem.
Your pihole.cron file should be something like
1 2 * * * /boot/myscripts/piholescript
Where the 1 2 * * * is a cron expression for when to run, followed by a space, followed by the path of the script to actually run.
Saving the script itself as a .cron file winds up with cron trying to parse the script, which won't work.
To be honest though, I'm not quite sure how that would result in rootfs getting full (I've never tried making the mistake on purpose), and my gut is still @bonienl's suggestion of
But, fix the cron thing first so at least the syslog gets cleaned up. You are going to have to reboot the server though.

Thanks for the explanation. I'm still new to unRAID, so I didn't realize that the pihole.cron file isn't supposed to contain the script itself. I'll use method 2 instead, through User Scripts, and delete the pihole.cron file. That should fix the cron issue.
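For anyone who hits the same mistake, here is a sketch of the layout Squid describes. The paths are his examples, and the `pihole -g` gravity update is only my assumption about what the script would run; I'm writing both files to /tmp so the sketch is runnable, but on unRAID the .cron file lives on the flash drive (e.g. /boot/config/plugins/custom/).

```shell
# The .cron file contains ONLY crontab syntax: a schedule plus the
# path of the script to run -- never the script body itself.
cat > /tmp/pihole.cron <<'EOF'
1 2 * * * /boot/myscripts/piholescript
EOF

# The script itself lives at the referenced path and starts with a
# shebang. (pihole -g refreshes pi-hole's gravity block list; the
# container name "pihole" is an assumption.)
cat > /tmp/piholescript <<'EOF'
#!/bin/bash
# Refresh pi-hole's gravity (block) list inside the docker container
docker exec pihole pihole -g
EOF
chmod +x /tmp/piholescript
```

The key point is the separation: cron parses the one-line .cron file, and the shell runs the referenced script.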
-
1 minute ago, Squid said:
I think it's safe to say that you did it wrong. You should post what you actually did.
Curiously, you also have "uninitialized csrf_token" errors, and I've been wondering how it's possible to actually get that error; perhaps a full rootfs is one way.
So there were two ways I did this; I probably only needed to do one. The first was to create a file called pihole.cron in the /boot/config/plugins/pihole directory. The second was to do it through User Scripts. Both the script in the user.scripts plugin and the one in pihole.cron are the same.
-
Yes I got the cronjob script based on what was recommended by @spants in the pihole docker support thread:
-
10 minutes ago, bonienl said:
Something has filled your RAM; it says your rootfs (16GB) is 100% used.
Usual culprits are wrong path definitions for dockers.
Not sure why it's only 16GB...I have 32GB of RAM installed
-
17 minutes ago, Squid said:
Before you reboot, do this from the command line
cp /var/log/syslog /boot/syslog.txt
and then post the file.
-
I woke up this morning, logged in to the webUI, and saw that all my disks are gone. I tried to go into Tools to run diagnostics, but it hangs when I click on the Tools tab. All dockers seem to work fine, I can even watch videos on Plex, and I can SSH into my server. I still have no idea of the cause of this problem and haven't restarted my server yet. I've also tried typing diagnostics over SSH, but it gives me an error that there is not enough free disk space. I've tried multiple browsers and cleared the cache, and it still shows the same results.
DF command gives:
Diagnostics command gives:
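For anyone reading the same kind of output later, a generic sketch (not my actual output) of how to pull just the rootfs line:

```shell
# On unRAID, rootfs is a RAM-backed filesystem, so "100% used" here
# means files accumulating in RAM (e.g. a runaway /var/log), not a
# physical data disk filling up.
df -h / | tail -1
```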
-
49 minutes ago, itimpi said:
Not as such, but you could use the User Scripts plugin to schedule a script to run before mover to shut down the docker, and another one later to restart the docker (although you would have to guess at how long mover needs, and thus the timing of that one).
Ya, worst case scenario I was planning on doing that. Would the script be as simple as
docker stop rutorrent
and
docker start rutorrent
or do I need to do any checks before stopping/starting it?
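For reference, a minimal sketch of what those two User Scripts could look like, assuming the container name rutorrent from this thread (the script names and /tmp paths are mine, just so the sketch is runnable). The only check worth adding is a guard so the stop script is safe even when the container is already down:

```shell
# Hypothetical pair of User Scripts; schedule the first shortly before
# mover runs and the second after mover's usual finishing time.
cat > /tmp/stop_rutorrent.sh <<'EOF'
#!/bin/bash
# Stop the container only if it is currently running
if docker ps --format '{{.Names}}' | grep -qx rutorrent; then
  docker stop rutorrent
fi
EOF

cat > /tmp/start_rutorrent.sh <<'EOF'
#!/bin/bash
docker start rutorrent
EOF

chmod +x /tmp/stop_rutorrent.sh /tmp/start_rutorrent.sh
```

`docker start` on an already-running container is a no-op, so the restart side doesn't need a guard.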
-
1 hour ago, spants said:
have you set the clients to use pihole?
2 ways to do this -
1) on your router, set the dns servers to point to the unraid box (or pihole IP address if you have changed it)
2) turn off dhcp on the clients if you have not chosen (1) and manually add the pihole server ip as the dns server address
Figured out what was wrong... it was stupid on my end. I'll post here so others won't make the same dumb mistake I did. The template shows port 53 assigned twice (Host Port 1 and Host Port 2). However, I remembered reading in this thread that it's a duplicate and one of them isn't needed, so I stupidly deleted Host Port 2. Now I realize you need both (one is for UDP and one is for TCP).
-
I'm trying to figure out how to reverse proxy my rutorrent docker. This is how my template looks (see below), and I access the GUI through port 82. I'm not sure how to add a base URL, so locally I access it through http://[server IP]:82. When I go through my duckdns/rutorrent it isn't displaying the rutorrent GUI properly.
location /rutorrent {
    include /config/nginx/proxy.conf;
    proxy_bind $server_addr;
    proxy_pass http://[IP]:82/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Real-IP $remote_addr;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd;
}
-
I generally leave my torrents seeding, and they currently download to the cache drive. However, when mover gets invoked every day, it doesn't move the completed downloads from the cache drive to the array because they are still seeding through rutorrent. Is there any way I can tell mover to stop the rutorrent docker before it gets invoked and start it again once it finishes?
-
On 12/18/2016 at 4:04 AM, bluepr0 said:
thanks for your reply, just tried it but it does the same; docker and web UI are working but it's not receiving queries
Did you ever resolve this? I have the same issue where my web UI is working and DNS is running, but I'm not receiving any queries.
Starting dnsmasq
dnsmasq: started, version 2.77 cachesize 10000
dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
dnsmasq: using nameserver 8.8.4.4#53
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /etc/pihole/black.list - 0 addresses
dnsmasq: read /etc/pihole/local.list - 2 addresses
dnsmasq: read /etc/pihole/gravity.list - 442884 addresses
-
Bump. Anyone have a similar setup they are running on unRAID?
-
I've tried to look up posts in the past but couldn't find too much information on this. I want to be able to ideally recreate my current setup that is running on a raspberry pi.
Currently I have rtorrent/rutorrent running on Debian. It downloads into an incomplete folder; once a download is complete, it hard links the files into a complete folder, which Sonarr/Radarr can read to rename those files. The incomplete folder still has the original files so they can continue seeding 24/7.
To set this up in unRAID, would it work if I created two user shares, "complete" and "incomplete", and had the docker container reference both shares, or do I need to create a single share with two folders in it, complete and incomplete? I hope it's possible to hard link instead of copying the data twice.
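On the hard-link point: hard links only work within a single filesystem, which is why the two folders generally need to live under the same mount (in practice, one share with complete/ and incomplete/ inside it). A minimal sketch, using /tmp in place of the share path and a made-up filename:

```shell
# Hard links are two directory entries for the same inode -- no second
# copy of the data is made, and both entries must be on the same filesystem.
rm -rf /tmp/downloads
mkdir -p /tmp/downloads/incomplete /tmp/downloads/complete
echo "episode data" > /tmp/downloads/incomplete/show.mkv

# Link the finished download into complete/ for sonarr/radarr to rename
ln /tmp/downloads/incomplete/show.mkv /tmp/downloads/complete/show.mkv

# Link count is now 2: deleting either name leaves the other intact
stat -c '%h' /tmp/downloads/complete/show.mkv   # prints 2
```

If the two names were on different filesystems (e.g. two shares resolving to different disks), `ln` would fail with "Invalid cross-device link" and you'd be forced to copy instead.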
HDD passes 2 cycles of preclear but SMART warning
in General Support
Posted
Passed another round of preclear and the values didn't change. Thanks for everyone's support!