newerNan

Members
  • Posts: 12
  • Gender: Undisclosed



  1. Never mind, the physical power button appears to have triggered a safe shutdown. Restarted now and everything looks good and normal. Will kick off a parity check to be safe. Thanks for the assistance.
  2. Thanks for your reply @JorgeB. I definitely wouldn't have expected a power issue; it's a high-end Corsair PSU running off a UPS, but OK, stranger things have happened. New issue: stopping the array or trying to shut down appears to do absolutely nothing. The log is 100% full at the moment, which is perhaps related to it not responding to these actions, but I also can't figure out how to clear that down without a reboot. Without any new log entries to view, I have no idea why it's not stopping the array or shutting down. Any advice to get a clean shutdown? (A rough console sketch for inspecting the full log follows after this list.) Thanks,
  3. Sorry, I forgot to mention that everything appears to still be working fine. Dockers are fine, and remote array access over SMB is fine, though not tested in anger.
  4. Hi, I've never had read errors before and don't know what to do, so any help would be greatly appreciated. Unraid is reporting that 8 drives have suddenly encountered read errors, which I find somewhat hard to believe. Some remarks:
     • Unassigned Devices is suddenly showing all 8 of these drives with new sd[x] IDs and some sort of failed-array icon, while they appear to still be in the array. My genuine unassigned drives aren't even showing up any more (maybe there's a limit to how many the dashboard can display).
     • Only 2 random 8TB drives didn't have any errors; every other array drive (including both parity drives) did. The 6TB drives all show 1036 errors and the 8TB+ drives show 1133, suggesting the failed reads hit all drives at the same points through the data.
     • I think it was fine this morning, so it happened at some point today. I haven't done a parity check in over a month, so why would all drives be read at the same time?
     • Some drives are connected via an HBA, some direct to the motherboard controller. Parity is definitely on the motherboard and the 6TBs are definitely on the HBA, so it doesn't appear to be a controller issue, because affected drives are spread across both.
     • I'm running 6.10.3.
     Between 2pm and 3pm today I did accidentally attempt to start a years-old VM, which failed miserably, erroring on all sorts of issues. It wasn't something I expected to work: it would have been attempting to pass through GPUs that didn't exist, and used a drive image (or potentially a physical disk) for the OS that was no longer present. I feel this might have been around the time the read errors happened, but I don't understand why launching a VM would affect the array's data drives. Please help, I want to ensure my data is safe; my offsite backup hasn't been working for about 2 months and I've been meaning to sort it out. I'm tempted to restart the server in case that just resolves a one-off issue and sorts out the Unassigned Devices plugin, but I don't want to lose any diagnostics data. Any assistance appreciated, thanks, harmzserv-diagnostics-20221031-1515.zip
  5. Hi, I'm trying to mount a WD Gold 6TB drive that's 4Kn. It mounts absolutely fine, formats fine with NTFS, XFS or exFAT, and if I share it, I can see the full 6TB is available and working. However, if I pass it through to a Windows VM, it shows as 3 partitions, two of which are unallocated. If I try to create a new partition within Windows Disk Management it just errors; I tried with diskpart too, and it errors on any action. Is there a special way to do this for 4Kn drives, or is this issue nothing to do with 4Kn? I'm running 6.6.6 with UD: 2019.01.22a. I've attached screenshots here showing details of the config. I have another Seagate 2TB drive that mounts and passes through to Windows fine. (A rough sector-size check sketch follows after this list.) Thanks,
  6. Please could we get this docker updated to qBittorrent v4.1.1, as some trackers have blacklisted all previous v4.x releases. Thanks,
  7. Hi @Frank1940, Thanks for your reply. I've attached a diagnostics dump from right now. As a parity check is already 15% through, I was going to leave it to complete (if it gets that far) for now, rather than stopping it early and rebooting to safe mode. Is it still worth uploading a diagnostics dump after this completes, from its current normal boot? At the moment, nothing starts automatically from boot; I've always manually started the array and each docker container. I agree, I should have given it a week after the upgrade, before the parity drive change, to ensure it was running OK. At the time, one of my drives had hit 85%, and I was getting anxious. I thought my 10-minute smoke test would have sufficed. Lesson learnt, I guess. harmzserv-diagnostics-20170502-2227.zip
  8. Hi all, this is starting to drive me mad, so I'd really appreciate any help you can provide. I'm going to go into detail here, so the short version first. TL;DR: the server was stable for over a year and never had a single issue; I upgraded the parity drive and went 6.1.9 -> 6.3.3 on the same day, and now it keeps becoming unresponsive in a variety of ways, every 12-48 hours.

     24th-26th April: My array had 1 parity + 3 data 6TB WD Reds in it, and I had a brand new 8TB Red in the server, precleared and ready to go when I got round to it. So on Monday 24th April I decided to both upgrade the server from 6.1.9 to 6.3.3 and swap out the parity drive. I did the upgrade first; there was a bit of an issue with Docker and I had to delete and recreate the image, no biggie, but then it all seemed to work OK. Then I swapped out the drive and the 8TB parity rebuild started. I also decided to add a container, Krusader. I think the parity rebuild was complete at this point, but on the evening of Wednesday 26th April my wife said she couldn't access Plex while I was away. I tried to VPN onto my OpenVPN container and it wouldn't connect. I VPN'd onto my router and tried to browse to the web UI: nothing! I can't remember which, but a couple of containers still seemed to respond, but ultimately the server was down. When I got home, I connected a monitor and saw this mess: 13sec video of monitor SlowMo (poor quality though). I couldn't SSH, I tried the power button, nothing happened. Had to force it off. Once back on: unclean shutdown detected, parity check, etc.

     27th/28th April: Plex not responding. I try to restart the container, the web GUI stops responding, and there's nothing I can do, but most other dockers seem fine. I use SSH to run powerdown; it says something about a system halt, and then that's it. I can't even SSH any more, and I think only one docker app is still working. I put a keyboard directly on the server and run powerdown again; the same thing happens there, system halt but everything's frozen, HDDs still going full pelt in the parity check. I have to force it off again.

     29th/30th April: I'm abroad, and notice on the 29th that a random core of my 8 cores shows 100% usage. Every 30 seconds or so it switches to a different core, but at any one time it's 100% active on one of them. cAdvisor tells me that the process using 92-100% CPU is kswapd. I think maybe it's down to the parity check. On the 30th the parity check is over (parity now valid with 1 error!?) but the CPU usage is still there. I don't know why it's happening; the only change I've made that would affect CPU time is the Krusader container, which I don't really need. I try to stop Krusader from the dashboard (I'm abroad and VPN'd to a container, by the way). Instead of the page reloading with it stopped in, say, a second, it just loads forever. The whole web UI is now not responding, but containers still are. About 5 minutes later it comes back with "Code execution exception" or something like that, with a big red X. I try again a few hours later, no joy. I decide to do it when I'm back home.

     1st May: I'm back home and I cannot stop that Krusader container; the web GUI just hangs for 5 minutes every time and fails. I see something in the logs from a few hours ago about "TCP: request_sock_TCP: Possible SYN flooding on port 8181. Sending cookies. Check SNMP". That's PlexPy's port. I try to stop that, same thing, it will not stop. I try to update PlexPy, because randomly pretty much every container now has an update. I end up with an errored-out PlexPy container in my Docker view and an orphaned PlexPy container. Can't start it, the logo has gone. Weird. I decide that if I can't stop Krusader, which may or may not be the culprit of this 100% kswapd issue (I know it's a kernel thing, but Krusader was my only software change, other than the Unraid upgrade, and I can't undo that!), then I need to restart. There I am, with my finally valid parity. I hit stop array, and the bottom-right breadcrumb trail just hangs forever on "stopping services". Nothing works, the web GUI is not responding, some containers did stop, others haven't yet. I SSH on successfully, copy the syslog to a user share and have a look; it had got as far as "spinning up disks", I think. I try powerdown, doesn't work. Tap the power button, doesn't work. I have to force it off AGAIN. Boot it up, start the array, parity check starts. I delete Krusader before starting any dockers, delete both the broken PlexPy and the orphaned PlexPy image, and re-add it. Start all containers, everything is looking smooth, no kswapd issues, the parity rebuild is running very smoothly and quickly, CPU/RAM usage looks good. Finally, everything is back to normal.

     Today (2nd May): Last time I checked on the server, parity was on 73%, all good. I'd watched a couple of things on Plex earlier and everything was running smoothly! About 30-60 minutes later, I fire up my TV's Plex app. It seems to take a while to connect (all on ethernet), but eventually shows my libraries on the left, with no thumbnails. I expect a buggy Samsung Tizen app, so I exit it and try again. This time I get some thumbnails, but not all. Oh well, I've got the one I want. Click it, hit play, and it just loads constantly (direct play, not using CPU, usually an instant load). I exit again and reopen the app; this time it won't connect at all. I try on my phone, nothing. PlexPy sends me a notification, "Plex Server down!". I try every other docker and the web UI, nothing. PlexPy is the only docker working, and it's complaining that it can't connect to Plex. I SSH in and it opens with an empty terminal. I exit, reconnect with SSH, and get a login. I issue a syslog copy command to a share; it accepted the command but never returned anything, just a cursor on an empty new line. I check the share and find a zero-byte syslog file there. Try to SSH again, nothing. Try again, it works, I try to view the log, nothing. Try to SSH again, nothing. Keyboard on the server, log in as root, I see the "last login at..." but it doesn't give me a prompt to type anything; it's hung. SSH again, manage to get in, and issue a powerdown, no response. I hit the power button, nothing. All shares still working, but that is it, HDDs still going mad doing the parity check. I've had to force it off again! Now it's rebuilding, I'm at 10%. Everything seems to be running fine at the moment, but I doubt it will last long. I installed the Common Problems plugin; it detected a Sonarr port path mapping that deviated from the template, no biggie, Sonarr is fine. It also told me I have an SSD with no TRIM plugin (I assumed TRIM support was built into Unraid), so I've installed it and scheduled it for a couple of hours after mover's schedule. I'm running troubleshooting mode now, awaiting the inevitable! Please let me know anything I should do to help diagnose the issue, or any dumps I can provide. Please, this is really stressing me out.

     Specs-wise: Xeon E3-1230v2 / ASRock Z77 Pro3 / HyperX 2x8GB DDR3-1600 / Samsung 830 256GB cache and WD Red 8TB on the mobo's SATA3 / 4x WD Red 6TB on the mobo's SATA2 / GT210 for head / Aerocool AP-Pro 450W PSU (pulls 60-65W idle / 100-120W load from the wall). Also, I doubt it's going to be this, but the only known 'issues' I have: the cache drive is btrfs (it was the default when I installed for some reason, and I found out later it can have issues; my main reason to upgrade to 6.3.3 was for 'cache: prefer' so that I can unload the data, reformat to XFS and reload it at some point, but I'll do that when things are stable). The parity drive is running a little warm: the 8TB Red is in a makeshift stand in a 5.25" bay with poor cooling. It runs at 42°C normally, idle or with normal load, but parity checks bring this to 44-46°C; it has never gone above 46°C. The other HDDs sit at 30-32°C. The SSD is 28°C idle, 33°C during mover, and rockets to 44°C on heavy write load. I'm currently looking for a new case to accommodate all the HDDs with better cooling. Thank you for your time, I know this was a long read.
  9. It has been 9 months and 6.2 has finally been released! One can only assume that you guys at Limetech are now considering what will go into the next release. Is it too early to ask if 4kn drive support will be included?
  10. So, I thought I'd just provide an update... After 24 hours, still no joy, but I noticed my HDD light wasn't constantly on any more, so I assumed my preclears had finished. I WAS able to browse to my flash share, and there were new reports in the preclear_reports directory which I could check, so now I was ready to turn it off. I pulled the network cable out (the theory being that with no external access, not much could be writing to the array). I used pretty much everything in the clean shutdown section here: http://lime-technology.com/wiki/index.php/Console#To_cleanly_Stop_the_array_from_the_command_line
     • This stopped Samba and sort of unmounted the disk, but there was no mention of the parity, and the disk was still busy.
     • Some of the commands in there tried to end the processes keeping the disk busy, which semi-worked.
     • I tried "powerdown", but nothing happened (I didn't have the plugin installed, so it sounds like it was hanging on that same busy disk, or while copying the log file).
     • I used the shutdown command, which stopped a load of other services.
     • Then I did the scary holding down of the power button.
     Restarted, and YES! Everything working normally! Hurrah!! The dashboard says it detected an unclean shutdown, and that WILL mean enduring a parity check, but that's fine: I'm away for the weekend anyway, and I'm adding two precleared drives, so (I think?) that's recommended anyway. First thing I did, though: installed the powerdown plugin! I wish this was included as standard; it's so good, I just never knew about it. Anyway, I hope this helps someone (I've put a rough console sketch of the sequence I used after this list)...
  11. Hi all, I'm fairly new to unRAID, but I've been researching it for over 6 months and have had my server up and running for about a month now. Everything was going smoothly: I'm running 6.1.9, have a 6TB parity, one 6TB data drive, and two new 6TBs currently preclearing via the plugin. Today I was setting up an FTP server. I used the ProFTPd plugin, which requires users to be created using unRAID's native users. So I made my first one for full access to my FTP user share; no problems there, working great. I then started creating my second user to use a subdirectory. I filled in the name, description and passwords, and on clicking the "add" button, POOF! A Chrome "site cannot be reached" error. Now the web GUI just cannot be accessed at all! Something tells me it may have been caused by the password I used: it started with a " and this may have raised an exception in the underlying code? It also included other special characters. If anyone wants to try it, it was ")5=RVgE8<Cw)]ah
     Diagnostic checks and notes:
     • Doesn't work from any browser or device
     • Doesn't work from hostname or IP address
     • Telnet to port 80 fails
     • I CAN browse to the web interfaces of my docker apps on other ports
     • My shares are still functioning, and I can use them normally
     • FTP access using my first user works fine
     • FTP access to the new user I was creating doesn't work
     • conf/passwd contains my last created (working) FTP user, but not the new one I was creating when this happened
     • conf/smbpasswd - same as above
     As the passwd file doesn't contain the user I was creating, it doesn't feel like it has caused a permanent problem, and a reboot should do, but my issue is that I'm currently preclearing and my array is online. So, bearing in mind I have very little SSH/terminal experience (but I'm willing to try with clear instructions):
     • Is there any way I can kick/restart the web GUI interface / webserver?
     • If not, how can I check my preclears to know when they're done, and view the reports?
     • Once the preclears are done, how can I safely stop my array and reboot the server?
     (A rough SSH sketch for checking the preclears while the GUI is down follows after this list.)
     Worth noting: I was having connectivity issues last night on my phone. I'd get the login prompt, hit "log in", and the page would just try for ages before timing out. For about half an hour afterwards, it would just keep trying but time out every time I tried (without login prompts). After half an hour or so, I'd get the login prompt again, but still no actual webpage, just timeouts. I don't know if this is related, but it seemed to fix itself after about 2 hours. The difference is, this time it isn't just being slow or whatever; it's just not there. The connection is "being refused" instantly when I try to connect, as if nothing is listening on port 80. It's been 10 hours now. The webserver has died! I really appreciate any help you can give me, I'm really struggling right now. Thanks,
  12. Hi all, I've been planning to build an unRAID server for a while, and have been slowly accumulating parts. I thought I'd familiarise myself with unRAID before I set everything up properly once I have all my HDDs.
     My Rig: Right now I have a Xeon E3-1230 V2 with 8GB RAM. After a lot of research, I've bought my main parity HDD, a WD RE 6TB. I understand the parity drive will see the most reads/writes and endure the majority of the strain, so I decided to go all out and get something I could rely on. For the data HDDs I'll end up going for WD Reds, but for now, while I'm testing it all out, I've thrown in a couple of old 2TBs (WD/Seagate).
     The Problem: I'm on the trial license, so only 3 attached drives, which is fine. I cannot get the array to start with the 6TB WD RE. The array starts fine with the other drives, in any parity/data combo, but if I try to use the RE as parity, it just won't start. It tries, says it's spinning them up, etc., but on the page refresh the array is still stopped. Strangely, I CAN get the array to start with just my 2TB drives (1 parity / 1 data) with the 6TB RE as a cache drive; no problems there, it happily starts up and I can use the shares/cache. It just won't start with the 6TB RE as parity. The drive is new, with less than 4 hours power-on time, and SMART reports that it's completely fine. I've used it under Windows and Linux and everything's fine; it performs great. Please can someone help me out: I really had my heart set on using unRAID, I don't want to go for another solution, and this drive was kind of expensive.
     Debug Info: I've wiped the USB and installed a fresh copy of 6.1.9, booted up, loaded the key, and tried to start the array with the RE as parity: same issue. I've attached a full syslog and a SMART report of the drive. Thank you in advance for any help you may be able to provide. I'm sorry if I've posted this in the wrong area, or if I've missed something. Please let me know if there's any other information I can provide to help diagnose the issue. tower-syslog-20160306-0610.zip tower-smart-20160306-0632.zip
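
A rough console sketch for the full-log problem in post 2: how I'd check what is eating /var/log and empty it without a reboot. I'm assuming /var/log sits on a small tmpfs and that syslog itself is the culprit, so treat the exact paths as guesses rather than a confirmed fix.

    # Check how full the log filesystem is (assuming /var/log is a small tmpfs)
    df -h /var/log

    # See which files are actually taking the space
    du -sh /var/log/* 2>/dev/null | sort -h | tail

    # If syslog is the offender, empty it in place rather than deleting it,
    # so the logger keeps writing to the same open file handle
    truncate -s 0 /var/log/syslog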
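
A rough sector-size check for the 4Kn question in post 5, just to confirm what the host actually reports for the drive before blaming the VM layer. /dev/sdX stands in for the WD Gold's device node; this is only a diagnostic sketch, not a fix.

    # Logical and physical sector sizes: a true 4Kn drive should report 4096/4096,
    # while 512e reports 512/4096 (/dev/sdX is a placeholder)
    blockdev --getss --getpbsz /dev/sdX

    # Same information from sysfs, if preferred
    cat /sys/block/sdX/queue/logical_block_size /sys/block/sdX/queue/physical_block_size

    # Show the partition table as the host sees it before passing the disk through
    fdisk -l /dev/sdX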
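
A rough sketch of the manual stop sequence described in post 10, pieced together from that wiki page. The script locations (/etc/rc.d/rc.samba, /root/mdcmd) are assumptions based on the Unraid releases I was running, so double-check them against the wiki before relying on this.

    # Stop Samba so nothing new opens files on the array (path is an assumption)
    /etc/rc.d/rc.samba stop

    # List, then end, anything still holding the data disks open
    fuser -mv /mnt/disk*
    fuser -km /mnt/disk*    # use with care: this kills the listed processes

    # Unmount the data disks
    umount /mnt/disk*

    # Stop the array itself (mdcmd location and verb are assumptions from older releases)
    /root/mdcmd stop

    # Power off; with the powerdown plugin installed, 'powerdown' handles the whole
    # sequence, which is why I recommend installing it at the end of the post
    powerdown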
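
A rough SSH sketch for the preclear question in post 11: checking progress while the web GUI is down. It assumes the preclear plugin writes its reports to the flash drive under /boot/preclear_reports (which matches what I later found in post 10) and that the preclear script shows up in the process list; the report filename is only an example.

    # Is a preclear still running? (assuming the script name contains "preclear")
    ps aux | grep -i "[p]reclear"

    # List the report files on the flash drive, newest first
    ls -lt /boot/preclear_reports/

    # Read the newest report (example filename, yours will differ)
    less /boot/preclear_reports/preclear_report_XXXXXX.txt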