Frayedknot

Members
  • Content Count

    25
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Frayedknot

  • Rank
    Member

Converted

  • Gender
    Undisclosed

  1. I love Unraid, and to be honest I have a container holding every computer-product badge I've received in the past. I like to keep my computers clean of stickers, although if I got a badge I would feel inclined to use it. The irony is that Unraid is so stable and has been so great that I keep it in the furnace room and nobody even gets to see it. Great product, and happy birthday!
  2. I have the same issue. Going back to 6.5.3 seems to be stable. I've tried 6.6.0, 6.6.1, and 6.6.2. I have since jumped to 6.6.4 and 6.6.5 and it has been good. I thought it was possibly related to a Realtek LAN card, but because my system hard-stops, that seems different from those symptoms. Neither my console nor my syslog showed anything. Are you now running 6.6.3, or did you upgrade to 6.6.4/6.6.5? Maybe it's even a plugin that got an update!? Just throwing wild ideas out now.
  3. I've been trying to diagnose a crash with the Unraid 6.6.x series for a while, to no avail. Unraid 6.5.3 is stable and rock solid, as every release has been since, I believe, the 4.7 days (if I remember the version number correctly). The machine just hard crashes, in that nothing responds: the keyboard no longer functions on the console, and of course it will not respond over the network. I've tried "Fix Common Problems" troubleshooting mode, and when the crash happened nothing was in the logs. Reading the forums I learned another tip: on the console I just tailed the syslog, so hopefully I'd see something reported that way. I swear the machine knows when I'm looking at it, because this morning I just checked on it and it was fine. (Checking means changing the source input on my monitor to Unraid, looking at the logs, and testing the keyboard.) 20 minutes or so later it crashed, with no messages in the syslog for it. This thing can run for about 6 days without a crash, or just hours, and I've tried this with 6.6.0, 6.6.1, and 6.6.2. The temperature is pretty much always less than 30 C (it's water cooled). I've tried a memory test and nothing was reported either. Again, 6.5.3 works fine, so I honestly don't think it's something wrong with the hardware. So where do I go from here? Motherboard: ASUSTeK COMPUTER INC. P8H77-M PRO. Memory: 16 GB (max. installable capacity 32 GB). Processor: Intel® Core™ i5-3570 CPU @ 3.40GHz. I included the hardware profile in case that helps. FrayedKnot_Hardware_Profile.txt
  4. You could very well be correct; I just assumed it would stop working outside of the 30-day trial and didn't even consider that that was how the license might work. If that is indeed how it works, then it's a perfect solution for what I need.
  5. I prefer to preclear my drives on another machine before putting them in my system, but I only have one license. I used to use a trial version key to do this, but now with 6.4 it sounds like in 30 days it will cease to work for this purpose. Is there any possibility of a version that doesn't have the ability to start an array, for just this type of purpose? I really like using the web UI to do the preclearing, and I'd like to stay up to date with the changes to the preclear scripts. I probably won't need it again for a long time and could just request another trial, but I thought I'd make the request to see if there is a better solution I can rely on when I need to preclear some more disks. I've been using Unraid for a while now and I must say that the product as well as the community are totally awesome. Keep up the great work!
  6. I think doing the update fixed the error message on the settings screen. Also, I have 13 dockers (11 running). I changed the scheduled times for docker/plugin updates so I can see which one is giving me the error. With my luck, the process of trying to diagnose it will fix the issue. (I hate when that happens)
  7. I have this problem as well. I don't know when it started, but it has been about a couple of weeks-ish. I do see an error on the CA Plugin Auto Update Settings page. I included a screen shot so you can see my options.
  8. (According to my APC UPS unit) Mine does 40 watts with spun-down drives and 60 watts when spun up. If it's transcoding or unraring a large file, it can go to just more than 100 watts. Core i5-2320, 8 GB RAM, 6 drives totaling 13 TB of usable storage, 3 fans, and 13 dockers active. It just amazes me what this system can do, all at the power it takes to light a small room with an incandescent light bulb.
  9. I just changed over my 250 GB WD Black for an Intel 530 250 GB SSD. The process was easy and followed the steps from here: https://lime-technology.com/wiki/index.php/Replace_A_Cache_Drive The only real issue I had was that it reformatted as btrfs for some reason!? My default was XFS, yet it chose btrfs. Then my docker was complaining about known issues and data corruption, saying I should rebuild with the newer version. The version I have is 1.7.1 for docker; I couldn't find any info on what the current version is. I reformatted it back to XFS and recopied the cache data back, and that warning didn't show any longer on the docker settings. Should I be rebuilding the docker anyway? The good news is that almost everything feels snappier now. All my plugins and dockers are on the SSD and it's so much faster. Updates are immediate, as are the web pages for the dockers. Any library updates (CouchPotato, Plex) or extractions (SABnzbd) are pretty fast now. I didn't expect to notice that much difference, but it's nice to have.
  10. Update showed up on the Unraid docker page and has been applied. She worky now. Thanks.
  11. Me three... it no longer starts (Exit 127). The Deluge docker from BinHex was also doing the same thing after an update, but after a few more updates (forced, plus one update that became available) it now works. So I suspect it's being fixed... hopefully.
  12. I like OpenELEC; it speaks to what I want, but I use Plex because I want a central database and don't want to set up each device's source material manually, especially when it comes down to manually adjusting the results returned by the scraper. Also, the thought of all my devices each scraping that data just bothers me, given the wasted CPU and internet usage. I've considered trying to make my own central SQL db with Kodi/OpenELEC, but I use my laptop and a couple of tablets, and I have devices on different versions of Kodi/OpenELEC anyway (i.e. an Openhour Chameleon), so that isn't going to work for me. I've set up an Emby docker, but I haven't tried connecting to it yet. Honestly, Plex does everything I want and does it pretty well.
  13. Ya, that works for me. I feel bad for finding an issue with this awesome product. It isn't a big deal, and the workaround is reasonable. I'm glad you replicated the issue, 'cause I don't have any more drives to do now. Thanks again for making unBALANCE. With it I was able to convert all my drives from RFS to XFS without rebooting, without putting the integrity of the data at risk, and without any downtime. It took a long enough time, and I probably stressed my parity drive during the process, but it worked beautifully.
  14. Quoting the earlier reply: "Thanks Frayedknot, yes it really seems something related to the freshly formatted disk. If you still have one more of these cycles left, could you please run df --block-size=1 /mnt/disk* from the command line, right around the time unBALANCE is showing 0/0? I'll try to replicate in my test setup."
     No change; by the way, I also just upgraded to the 6.0 version before this format, in case that might have some impact. (So the current release version also has the same issue.)
     BEFORE (after data moved off):
     root@Tower:~# df --block-size=1 /mnt/disk4
     Filesystem         1B-blocks      Used     Available Use% Mounted on
     /dev/md4       2000337846272  33628160 2000304218112   1% /mnt/disk4
     AFTER FORMAT TO XFS:
     root@Tower:/mnt# df --block-size=1 /mnt/disk4
     Filesystem         1B-blocks      Used     Available Use% Mounted on
     /dev/md4       1999422144512  33751040 1999388393472   1% /mnt/disk4
     I should also mention that my Unraid web UI stopped working after the last unBALANCE run, but I'm sure that wasn't unBALANCE's fault. (Although I was monitoring unBALANCE from two separate computers, and it stopped working right at the end of the process: I got the "finished" and then no more worky with the Unraid web menu.) I don't expect anything from this, but I figured I'd let you know in case I'm not the only one that experienced it. I'm not going to restart my array right now in case you want to try something else, and I'll also try a restart of the unBALANCE docker in an hour or 3 to see if anything changes that way too.
  15. Quoting the reply: "Hi Frayedknot, thanks for the kind comments! I'm not exactly sure why this would happen. After the folder move is completed, it re-reads the free space of each disk, so the recently emptied drive should show up as such. On the other hand, I remember now that the Linux df command takes a bit to show the very latest info (depending on which processes are holding on to file references). This is normal behaviour and happens to me on another server I have. So, a possible scenario is: unBALANCE moved data off the drive; immediately the information is refreshed, but it still shows the disk as used; stop array, etc. Did you leave the unBALANCE docker running? Or did you restart it? Or just refresh the browser page?"
     Well.. it happened again, and my steps were accurate. I didn't have the docker running before, and just in case, I even stopped the unBALANCE docker and restarted it. unBALANCE seems to think it's 0 bytes (on a newly formatted drive, I assume). Here are the screen shots of my Unraid Main page, unBALANCE before (after the format, with the issue happening), and after the array was restarted, when it shows properly. (Wow.. 192 k limit eh.. it was hard to add 3 screen shots; hopefully a zip file is ok.) Oh, and apparently I'm running the current version of unBALANCE and docker. (unBALANCE v0.7.3-157.f2ebeef) Unbalance_Screenshots.zip
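On the crash hunt in post 3: Unraid runs from RAM, so /var/log (and anything tailed on the console) is wiped by the very hard-reset being diagnosed. A minimal sketch of snapshotting the syslog somewhere non-volatile; /boot is the usual Unraid flash mount, and the fallback paths are assumptions added only so the commands run on any box:

```shell
# Keep a copy of the syslog somewhere that survives a hard crash.
SRC=/var/log/syslog
DEST=/boot/syslog_snapshot.txt

# Fallbacks (for portability only): use scratch files when the real
# paths aren't available, so this sketch is runnable anywhere.
[ -r "$SRC" ] || { SRC=$(mktemp); printf 'kernel: example message\n' > "$SRC"; }
{ [ -d /boot ] && [ -w /boot ]; } || DEST=$(mktemp)

# One-shot snapshot; for a live feed run instead:  tail -F "$SRC" >> "$DEST" &
cp "$SRC" "$DEST"
echo "saved $(grep -c '' "$DEST") syslog lines to $DEST"
```

Started from the console or an SSH session before the next crash, the copy on flash holds whatever was logged up to the moment of the reset.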
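For the surprise btrfs in post 9, the filesystem a mounted drive actually carries can be confirmed with df -T before copying data back onto it. On Unraid the cache mounts at /mnt/cache (that path is an assumption); "/" is used below only so the sketch runs anywhere:

```shell
# Print the filesystem type of whatever mounted path you pass;
# substitute /mnt/cache for "/" on an Unraid box (path is an assumption).
FSTYPE=$(df -T / | awk 'NR==2 {print $2}')
echo "filesystem type: $FSTYPE"
```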
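A quick sanity check on the before/after df numbers in post 14: the reported capacity drops by under a gigabyte across the reformat, which is consistent with differing filesystem metadata overhead rather than lost data, and doesn't by itself explain the 0/0 reading unBALANCE shows:

```shell
# Difference in reported 1-byte blocks between the two df runs in post 14.
BEFORE=2000337846272   # 1B-blocks before (old filesystem)
AFTER=1999422144512    # 1B-blocks after the XFS format
echo $(( BEFORE - AFTER ))   # → 915701760 bytes, about 0.92 GB
```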