hypyke

Members
  • Posts: 196
  • Joined

  • Last visited

About hypyke

  • Birthday 09/01/1975

  • Gender
    Male
  • Location
    Atlanta, GA

hypyke's Achievements

Explorer (4/14)

Reputation: 0

  1. I missed the FAQ post because, based on the comment below the link, I misread it as being specifically about the technical aspects of running the docker. That was indeed my mistake. I do appreciate all the kind comments, though.
  2. What is the advantage of this docker over the existing plugin?
  3. Nope. I will if it happens again. Still, not deleting a docker during a config change because of a typo would be nice.
  4. I wasn't aware of that and had to go searching. Still, you have to redo the docker settings and such. It's not nice.
  5. I am not sure if this needs to be a BR or a feature request, or if it's just "the way it works." If you edit a docker container and that edit fails (for example, I added a path and didn't use a slash: Container path: something instead of Container path: /something), the change is made anyway: when you try to apply it, the docker is deleted first, the reconfigure then fails, and you end up with no docker. Should there be some validation, either on Apply or at least before the old docker is deleted? It's an easy mistake to make. (See the validation sketch after this list.)
  6. With the array stopped, an image is broken on the Main tab. The file is sum.png. See the attached screenshot.
  7. I am really up the creek now with this issue. I have a Z390 ASRock board and a SanDisk Ultra Flair (not the Fit), and it's giving me the same kind of issues: some sort of corruption after reboots, and then the empty /boot issue. Unplugging and replugging seems to have no effect. Once it's messed up, a reinstall is the only thing I can do. The drive TESTS fine in Windows, and I've reformatted and reinstalled using the correct procedure, which only fixes it for a while. I plug my old SanDisk USB 2.0 drive in and it seems to work fine. Of course, I've already done a key replacement, so the old one is blacklisted on the next upgrade. I have no idea what to do. I have two Pro keys: one is now on this drive that won't work, and the other is on an old drive that's only 256 megs. I could upgrade that one, but I am unsure if it will be stable.
  8. There are some old topics in the forum, and the docs aren't very clear. I now have the current version and I'm running docker containers and such. What is the correct way to cleanly power down the array from the CLI? The powerdown command has no help text. Will that power down safely? I am trying to script it from another machine so I can power everything down in case of a storm or something (see the SSH sketch after this list). Thanks.
  9. Cool. To note, though: if Sonarr is running as a plugin, then all its writes would be to local shares, not network shares, and I don't think that would trigger the issue I was having. I think it's specific to Sonarr writing to unRAID over the network. I THINK.
  10. EDIT: This might be the wrong thread for the issue I am having. I am really not sure, so, whatever..... :-) I still don't have a clear indication of what is causing the problem, to be honest. There is a core issue, and then there is a residual issue caused by the core issue, I think, but I am stable now and not really testing anymore. I can tell you that stopping the drive spin-downs and turning off the cache drive (and thus the mover) did not solve the problem for me, so I don't think the mover causes the issue; I think it is affected by the residual issue. However, I also haven't had a lockup for two weeks, and I think that was due to one change that doesn't involve unRAID: I disabled Sonarr (which runs on another server, not unRAID) from being able to rename and move files to unRAID. I had turned this on a while back so that I could stop using a different media sorter. I went back to my old media sorter and it's been stable. This is hardly conclusive, I know, but here is what I found.

The Core Issue: Something (in my case, Sonarr) is writing over the network and triggers the core issue. This causes the duplicate Samba processes, as its write has failed and it's retrying over and over again. Once the array is in this state it must be hard rebooted; all attempts to stop the array manually or kill processes fail. The hard reboot CAUSES THE RESIDUAL ISSUE. While the core issue is active, Samba stops responding to network requests, but it also locks files on the array, so in this state the mover will not run either: it runs and hangs due to the general state of the array and its processes. When you find the machine in this state it SEEMS the mover has caused the issue, but it happens to me with the cache drive disabled completely.

The Residual Issue: After the hard reboot, the filesystem on some or all of the drives is in a bad state. Transactions need to be replayed and sorted, and the FS needs to be checked. unRAID, not knowing what happened before, does not know to do this for you. After every single crash since I have been troubleshooting, unRAID simply reboots and tries to do a parity check; it does not check for issues with the transaction logs on the disk filesystems. Yet if I stop the array, put it into maintenance mode, and check the FS manually, I get a ton of replaying-journal messages, and they are ALWAYS on disks that were hung before the reboot (I can check that by looking at lsof output before rebooting). So if you DON'T do the filesystem checks after a reboot, the mover can hang AGAIN because of FS issues, and it SEEMS like the same issue. I verified this by shutting down all the apps I have on the net which write to the array and running the mover immediately after a reboot WITHOUT running FS checks. But this lockup is not the same, because Samba is not misbehaving.

So, how did I test? I turned off disk spin-down and the cache drive altogether at Tom's suggestion; those have been off since he asked, so all of my testing since is with them in that state, eliminating the mover or disk spin-down as the cause. I shut down both applications on my network that write to the array, at different times, to find the culprit. Sonarr and Emby are the two programs, and neither runs on the array itself; Sonarr moves files to the array, and Emby writes some metadata from time to time. I also created different users for every machine on my network so I could trace the one causing the Samba issues.

What I found was: the one causing/triggering the issues was always the server running both Emby and Sonarr. Reads never seem to cause the problems; writes are what trigger the CORE issue, and ONLY writes over the network. With both Emby and Sonarr shut down, the array was stable. After every hard reboot, FS checks needed to be run first, or even writing data to the array using Windows Explorer would cause lockups and failures. Eventually I ran Emby only for a while and it was still stable. When I turned Sonarr back on, the problem came back. So I reconfigured Sonarr to stop writing to the array at all and started using MetaBrowser, which I used to use, to sort and write my TV programs to the array. In this configuration my array has been up for an hour shy of two full weeks.

The only conclusions I can draw: writes over the network are causing the issue, but not all writes. I have no idea if it's Sonarr, or the WAY Sonarr is doing its writes, or if it's Samba that is the problem. (I think we can assume it's not exclusively Sonarr, because not everyone having problems is running it.) I have no idea if ReiserFS is a factor in the CORE issue. (Suggestions in this thread seem to point that way.) I don't know that this post does anything but add more confusion to the issue. :-/ (A sketch of one way to spot the stuck-Samba state follows at the end of this list.)
  11. That is the speed at which the parity drive is read alone, without any other drives to read and calculate parity against. OK, so are there some sample parity check and rebuild times using the 8TB drives with a 4+4 RAID 0 parity? That would be my main concern.
  12. So I am trying to understand the speeds here. Considering pkn's last post, it SOUNDS like speeds are great with the archive drives as long as you use a non-truncating RAID card and two 4TB drives for parity. What does the "free-falling" speed mean? 330MB/s sounds insane to me, but I am on older hardware and my parity checks and rebuilds top out around 80MB/s. (See the rough time math after this list.)
  13. "I just tried that and I didn't get any redball." Yeah, me neither. I run SMART reports on a schedule, daily, at night, and never had a drive redball like that. Both times mine dropped, I was streaming video.
  14. Thanks for the link. I set up a dual boot with it on my flash drive. The next time I have to reboot or have issues, I think I will give RC5 a shot. I'm also very interested to see if my parity check and rebuild speeds go up, as so many have reported. Currently, I am stable. Last night I copied a bunch of stuff to the array, via the cache drive, and the mover ran without issue later that evening.
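
On post 5 above: here is a minimal sketch, in Python, of the kind of pre-apply validation suggested there. The mapping structure and field names are hypothetical (unRAID's real template format differs); the point is only that paths get checked before the old container is touched.

```python
# Sketch of pre-apply validation for docker path mappings.
# Field names here are hypothetical, not unRAID's actual schema;
# the idea is to refuse to delete the old container on a bad path.

def validate_mappings(mappings: list[dict]) -> list[str]:
    """Return human-readable errors; an empty list means safe to apply."""
    errors = []
    for m in mappings:
        for key in ("host_path", "container_path"):
            path = m.get(key, "")
            if not path.startswith("/"):
                errors.append(f"{key} '{path}' must start with '/'")
    return errors

mappings = [{"host_path": "/mnt/user/media", "container_path": "something"}]
problems = validate_mappings(mappings)
if problems:
    # Leave the existing container alone; just report the typo.
    print("Not applying changes:", *problems, sep="\n  ")
else:
    print("Validation passed; safe to recreate the container.")
```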
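On post 8: a minimal sketch of scripting the shutdown from another machine over SSH. It assumes key-based SSH access as root and that powerdown on your unRAID version does a clean array stop; verify that before trusting it in a storm script.

```python
# Sketch: trigger a clean shutdown of a remote unRAID box.
# Assumes passwordless (key-based) SSH to the server, and that
# 'powerdown' there stops the array cleanly -- check your version.
import subprocess

HOST = "root@tower"  # hypothetical hostname; substitute your server's

def remote_powerdown(host: str = HOST) -> bool:
    result = subprocess.run(
        ["ssh", host, "powerdown"],
        capture_output=True, text=True, timeout=60,
    )
    if result.returncode != 0:
        print("powerdown failed:", result.stderr.strip())
        return False
    return True

if __name__ == "__main__":
    remote_powerdown()
```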
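On post 10: one way to spot the stuck-Samba state described there is to watch for an smbd pile-up. This is a heuristic sketch, not a diagnosis; the threshold is arbitrary and should be tuned to whatever is normal on your server.

```python
# Heuristic check for the "duplicate samba processes" state:
# count smbd processes via ps. THRESHOLD is arbitrary -- a healthy
# box usually runs far fewer; tune it to your own baseline.
import subprocess

THRESHOLD = 20  # hypothetical cutoff

def count_smbd() -> int:
    out = subprocess.run(
        ["ps", "-C", "smbd", "-o", "pid="],
        capture_output=True, text=True,
    )
    return len(out.stdout.split())

n = count_smbd()
if n > THRESHOLD:
    print(f"{n} smbd processes -- possible retry pile-up; "
          "capture lsof output before any hard reboot.")
else:
    print(f"{n} smbd processes -- looks normal.")
```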
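On posts 11 and 12: the rough time math is just drive size divided by average speed. Real checks aren't constant-speed (outer tracks read faster than inner ones), so treat these as ballpark figures, not predictions.

```python
# Back-of-the-envelope parity-check time: size / average speed.
def hours(size_tb: float, speed_mb_s: float) -> float:
    return size_tb * 1e6 / speed_mb_s / 3600  # 1 TB = 1e6 MB

for speed in (80, 330):  # the MB/s figures from the posts above
    print(f"8 TB at {speed} MB/s = {hours(8, speed):.1f} h")
# prints roughly 27.8 h at 80 MB/s and 6.7 h at 330 MB/s
```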