fcaico's Posts

  1. That issue doesn't seem like the same thing I experienced. In my case the array wasn't doing anything (and I have my mover scheduled for the early morning hours), and another PC on the same network was able to stream a movie from the array without issue. It was only the VM that had the problem.
  2. So I'd been running Unraid 6.4 for quite some time without issue. I use Unraid for a variety of things - file storage, Dockers, and virtual machines - one of which, a Windows 10 VM, has a video card pinned to it; I use it as an HTPC for my home theater. Today I decided to upgrade Unraid to 6.7 while doing some routine maintenance. The upgrade went smoothly and I assumed all was well until later on, when I tried using my HTPC to watch a movie served from the array. The movie would occasionally freeze for several seconds and then continue, which was quite annoying. Watching the same movie on another computer in the house was fine - it was only the Windows VM that had the problem. Unable to figure out what was wrong, I eventually downgraded my Unraid OS back to 6.4, and sure enough things are now running well again. My Unraid box has 32 GB of RAM; I've got 12 GB allocated to the VM along with 4 threads (2 cores), and the domains share is cache-only. Any thoughts? I'd like to stay current with my Unraid installation... Thanks!
  3. I did the upgrade and now see this warning in the web UI after restarting the array: "04-02-2017 12:39 Warning [uNRAID] - Cache pool BTRFS too many profiles MKNSSDE3240GB_ME16021910019DE84 (sdd)". What does this mean? How serious is it? And why am I only seeing it now? (I restarted my array without incident two days ago.) Frank
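Following up on my own question after some reading (so treat this as my understanding, not gospel): the warning apparently means the cache pool has more than one btrfs allocation profile for the same chunk type, often left over from an interrupted balance or a device change. A quick way to spot it, sketched here against sample `btrfs filesystem df` output rather than a live pool:

```shell
# Count distinct allocation profiles per chunk type. On the server you
# would pipe the real command instead of printf:
#   btrfs filesystem df /mnt/cache | awk -F, '{print $1}' | sort | uniq -c
printf '%s\n' \
  'Data, single: total=100.00GiB, used=80.00GiB' \
  'Data, RAID1: total=20.00GiB, used=10.00GiB' \
  'Metadata, RAID1: total=1.00GiB, used=0.50GiB' \
  | awk -F, '{print $1}' | sort | uniq -c
# Any count above 1 (Data here) is the "too many profiles" condition.
# A balance converting everything to one profile usually clears it --
# pick the target profile that matches your pool layout, e.g.:
#   btrfs balance start -dconvert=single -mconvert=single /mnt/cache
```

The balance command at the end is only shown commented out; I'd double-check the right target profile for your setup before running it.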
  4. Quoting an earlier post: "Since I know myself and others have had MCE issues in the past (with memtest usually not finding an issue), I was curious if LT might consider adding mcelog from http://mcelog.org/index.html to the unRAID betas? I may be mistaken, but from what I've read it seems to be the only way to ascertain what exactly caused an MCE log event (even if it's ultimately benign)." That's a great idea, and I agree. If it's not too large, I hope LimeTech will consider adding mcelog and running it in the recommended daemon mode. I'm not sure it's the best way, but you might also use the --logfile option for persistence and force the logging to /boot (I don't know how chatty that is, though). Without this we really don't have any tools for solving users' MCE issues. Plus, in some cases it can actually sideline faulty memory and processes, and possibly apply other live fixes, allowing continued operation and better troubleshooting. As for "I could add mcelog to the NerdPack 6.2 repo" - I'm not familiar with NerdPack... is that a plugin?
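To make the daemon-mode suggestion concrete, here's a hypothetical snippet for Unraid's /boot/config/go startup script. It assumes mcelog has actually been installed (e.g. via a plugin) at /usr/sbin/mcelog, and the log path is just an example:

```shell
# Hypothetical /boot/config/go addition -- not an official Unraid feature.
# --daemon keeps mcelog resident, decoding machine checks as they occur;
# --logfile writes the decoded records to the flash drive so they survive
# a reboot (how chatty this gets will depend on the hardware).
/usr/sbin/mcelog --daemon --logfile=/boot/logs/mcelog.txt
```
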
  5. Quoting: "I'm not VM experienced, so others might be better, but here are a few comments: Several MCEs (machine check events) were noted, no apparent cause. You may want to try a memtest, etc. As hardware events, it's hard to relate them to any specific software symptom, but they may be a source of trouble." Yeah, I've seen those. Not sure what's causing them. I ran memtest for 36 hours and it came up with nothing, so I don't think it's RAM. Not quite sure how to proceed on that front. What causes CPU stalls? It's odd, because CPU 6 IS pinned to my dev VM, but that has been an extremely light-duty VM. CPUs 8 and 9 aren't pinned to anything, however! Since the reboot it's been 3 days 13 hours of uptime with normal usage patterns, so we'll have to see if this happens again...
  6. My system became unstable yesterday after a long run (almost 30 days) of being stable with 6.2.0-beta21. Web services and VMs became unresponsive. I'm not sure if the shares did as well, but I assume they did - though in retrospect I regret not checking. I could SSH in (and did so) and ran powerdown -r. Diagnostics were saved, but the system COULD NOT shut itself down, as devices remained busy. Here is my diagnostics file: unraid-diagnostics-20160613-2116.zip
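For what it's worth, the "devices remained busy" symptom usually traces back to processes stuck in uninterruptible (D) sleep, which can't be killed and so block unmounting. A small sketch for spotting them, using standard procps/awk and nothing Unraid-specific:

```shell
# List processes in uninterruptible sleep (STAT starts with "D").
# These are the ones holding devices busy at shutdown; their COMMAND
# column (smbd or shfs, say) hints at which subsystem is wedged.
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^D/'
```

On the server, `fuser -vm /mnt/user` (if available) will similarly list the PIDs holding a mount busy.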
  7. I have a question about file share configuration and how Dockers use shares. If a file share is set up to use the cache, do Dockers running on the Unraid server honor this? E.g., if an NZBGet docker is downloading a large file to a file share which is marked to use the cache, are all of the writes going through the cache?
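From what I've read since asking (so treat this as my understanding, not gospel): it depends on how the container's paths are mapped. A path mapped through /mnt/user goes through the user-share layer and should honor the share's cache setting, while mapping /mnt/cache or /mnt/diskN directly bypasses it. A hypothetical mapping, with an illustrative image name and paths:

```shell
# Hypothetical NZBGet container start. Because /downloads is mapped via
# /mnt/user, writes are routed by the share's "Use cache" setting; a
# mapping like -v /mnt/cache/downloads:/downloads would instead pin
# them to the cache regardless of the share setting.
docker run -d --name nzbget \
  -v /mnt/user/downloads:/downloads \
  linuxserver/nzbget
```
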
  8. So it's been over a week now and everything has been running great. Here's what I did to solve the problem: 1) Adjusted all my shares so that any large copying/moving between folders happens within the same share. About the only files I have that are large enough to cause the SMB lockups when moving between shares are movie files, so I've since put my downloads and Movies folders on the same share. 2) Switched from SABnzbd to NZBGet. I miss the rating capabilities of SABnzbd, but otherwise NZBGet is great. After doing those two things my SMB shares no longer get 'locked up' or held busy, costing me the UI and other functionality (including, most importantly, large chunks of my SMB file system).
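A side note on why step 1 helps, sketched with plain POSIX tools (temp directories stand in for share folders; Unraid's user-share layer adds its own wrinkles, so this is only the general idea): a move within one filesystem is a metadata-only rename, while a move between shares degrades into a full copy plus delete of every byte.

```shell
# Two directories on the same filesystem stand in for folders of one share.
mkdir -p /tmp/shareA /tmp/shareB
dd if=/dev/zero of=/tmp/shareA/big.bin bs=1M count=10 status=none

# Same filesystem: mv is just a rename -- no data is re-read or
# re-written, so nothing holds the disks (or SMB) busy for long.
mv /tmp/shareA/big.bin /tmp/shareB/big.bin
```

A cross-share move, by contrast, pushes every byte back through the array, which matches the pattern that was triggering my lockups.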
  9. Sorry, I couldn't upload the diagnostics earlier - I was off network. I've now attached them. As far as SABnzbd goes, like I said, I've also been able to cause the SMB lockup merely by copying a large file between 2 shares...
  10. OK, so I've been going at this for three weeks now and I'm at the end of my ability to figure out anything more to try. I use my unRAID server as a media server; as an application server for SABnzbd, CouchPotato, Sick Beard and MariaDB; and lastly as a VM host for a Windows 10 VM (with GPU and USB passthrough). I started with the latest beta of 6.2 as it seemed relatively stable and had the features I wanted. First the good: the OS has never crashed on me. Not once. The VM seems to work just great; my GPU is passed through nicely, as are the USB ports (and this is on a system with no onboard video and an nVidia GPU!). File serving is just fine (as long as all I'm doing is reading), but writing across shares or moving large data around a share is problematic. I first noticed that if I let SABnzbd download large files it would hang up the disks, keeping them locked. I would lose all SMB access, and the web UI would hang as soon as I did any operation that accessed the shares. If I turn off CouchPotato or don't allow SABnzbd to do large downloads (TV shows were apparently still fine), then everything is stable - I can go for a week without trouble. I then found that copying a large file (8-10 GB) between 2 shares would lock up SMB in just the same way! I tried the following experiment:

Copy large file from another machine to a share on the unRAID array: No problem
Copy large file from a share on one disk to another share on a different disk: Locks up SMB
Copy large file from a share on one disk to another share on the same disk: Locks up SMB
Copy large file from a share on one disk to the same share on the same disk: No problem
Copy large file from a share on one disk to the same share on another disk: No problem

This was fascinating. Perhaps a defect in the unRAID beta regarding moving between shares? So I tried putting my movies, TV shows and downloads all within one share (Media). I still get the SMB lockup. :-( I've tried adjusting docker settings, I've run the Common Problems plugin, etc., and I can't figure out any solution. These SMB lockups are such that I can't kill the processes holding the files open, and the only solution is to run "powerdown" and then hold down the power button on my server until it shuts off, then restart. At least this hasn't made me lose data yet. Before I did anything I ran a memtest for over 24 hours without any errors. There doesn't seem to be anything in the logs indicating a problem even happens (I do see 2 MCE [hardware error]s in the log at boot-up, but I haven't found anything to explain them: I checked all the cabling, there are no issues in the SMART logs, and temps all seem good). Question: should I downgrade to 6.1.9? Is that even likely to help? I can't figure out an easier way to downgrade other than to reformat my USB flash drive, load 6.1.9 on it and reconfigure everything. Is that the only/best way? I really want to figure all this out. I've spent a ton of time on it and I appear to be out of my depth. Frank
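For anyone trying to reproduce the copy experiments above, here's the cross-share case as a tiny script. The share paths are hypothetical and the file is shrunk to 10 MiB so it runs quickly; on my box the real trigger was an 8-10 GB file:

```shell
# Fabricate a large file in one share and copy it to another.
# Usage: repro_cross_share_copy SRC_DIR DST_DIR
repro_cross_share_copy() {
  src=$1
  dst=$2
  dd if=/dev/zero of="$src/testfile.bin" bs=1M count=10 status=none
  # The cross-share copy below is the step that wedged SMB for me:
  cp "$src/testfile.bin" "$dst/"
}

# On the server (hypothetical share mounts):
#   repro_cross_share_copy /mnt/user/Downloads /mnt/user/Movies
```
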
  11. So I'm still having this issue. If I'm just reading files off the file system (or doing light writes), everything seems fine for days (I had a previous uptime of 7 days). However, it seems that large writes cause the problem (if I queue up some big downloads in SABnzbd, this happens). Again, one or more disks are held busy by a process (usually smbd or shfs) and I can't kill them. Yesterday I restarted my server after such an occurrence and (from a Windows machine) moved a large directory (about 15 GB) from one user share to another. Halfway through, the process hung. SMB stopped responding, and while I was able to SSH into the machine, I was unable to stop the move. I ended up having to restart. Could this be the problem - moving large files around? SABnzbd does this when it completes a download... so does CouchPotato... Anyone? I'm pretty desperate to resolve this.
  12. Hmmm, that's pretty interesting. I have a mix of drives, and 3 of them are WD 6TB Red drives. I've noticed that sometimes it's not the Red drives which are held 'busy' by some process, but I suppose since my parity drive IS a Red drive this *might* be my issue... UPDATE: I just checked, and all my drives' Spin Down Delay is set to "Use Default", which is set to Never. So that does not appear to be it.
  13. This also seems like the problem I reported in my thread: https://lime-technology.com/forum/index.php?topic=48581.0 Everything is fine with the system running, even with the SABnzbd, MariaDB and Sick Beard dockers and a Win 10 VM going (for nearly a week straight); as soon as I also turn on CouchPotato and it downloads more than a trivial amount, I see the same symptoms as the OP here.
  14. You should upload your diagnostics next time it happens.