  1. Every time I shut down, one or two drives turn red. Near as I can tell this also affects some LSI cards and is purely a hardware issue, so it's going bye-bye. I went through a few threads and the recommended cards were either high-dollar or much older hardware. I'd like to find something in the middle somewhere, for under $200. What's good for nine 10TB+ spinners and four SSDs? I'd like to go with a 16-port card, but I'll do an 8-port with an expander if the combo is cheap enough. I have an available x16 slot because I'm using onboard video, but I may want to throw a card in someday, so if possible a 16-port is preferred. The SAS card that's in there now sits in an x8 slot, and beyond that I have only the x16 and an x1 available. THANKS IN ADVANCE!!!
  2. I just noticed this section of the forum, but I have to say, for the most part I just don't get it. Sure, on your desk you want something that looks clean and neat. But for a server? Fans run quietest in the basement, and the best fans keep your drive and CPU temps low. The best case is the one that facilitates cooling best, from my perspective anyway. That's why my Unraid box sits in one of these.... It ain't sexy, but those drive cages give me fans in front of stacks of 4, and then there are 3 more 120s behind them pulling air through. As a result I don't have heat issues, and knock wood, I haven't had a drive failure with 8 spinners and 4 SSDs going on 4 years now. Nope, like a wife of 20-plus years, the case doesn't look good in a bikini, it's not much on fashion, makes some noise regularly, and on the inside it's absolutely frigid. Just what I need. 🤠
  3. Since running mover with a docker up can cause anything up to hangs that force you to do a hard reset, I want to check my work before putting this in. Every night at 2AM I shut down Unraid using a script with the powerdown command. About 15 minutes after that, the Z-Wave outlet it's plugged into cuts power. In the morning power is restored, and thanks to a BIOS setting the Unraid box comes back up on its own. Since the outage runs from 2AM to 7:30AM, while we're all supposed to be sleeping, it shaves roughly 25% off the box's daily power usage. I do have a Z-Wave button on the 2nd floor that lets me easily turn it back on without having to traverse the basement should it be a late TV night. So far I haven't had to use it. Anyway, I thought the morning would be a great time to run mover. So I'm going to have Unraid come up at 7:30AM and run mover a few minutes later. What I'd like to do is have a docker start after mover ends. This is what I have. Please confirm. mover && docker start Emby-01 That just feels entirely too simple, so again, please confirm. ***UPDATE*** Never mind. I got impatient and tested it. Works beautifully. I just wasn't sure how mover worked, and wanted to be sure the subsequent command waited until it was completely finished. So the server goes down at 2AM, and the power is cut 15 minutes later to ensure a clean shutdown. Around 7:30AM the power is restored, and per the BIOS setting the server turns on. 7 minutes later the mover script executes, and when that's done the dockers and VMs start. Presto: 5.5 hours off the electric bill, no more crazy box-hanging conflicts with mover, and Docker gets bounced once a day, so fewer if not zero Docker hangs. No weird issues with mover spinning up drives either, since it runs right after a startup.
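The `&&` chaining this post relies on can be sketched with stand-ins. Unraid's `mover` binary and the Emby container aren't available outside Unraid, so hypothetical functions take their place here; only the `&&` semantics matter:

```shell
#!/bin/bash
# Stand-ins for Unraid's mover and "docker start Emby-01" so the
# control flow is runnable anywhere. Names are illustrative only.
run_mover() {
  sleep 1                  # pretend to move files cache -> array
  echo "mover finished"
}
start_emby() {
  echo "docker started"    # stand-in for: docker start Emby-01
}

# "&&" runs the second command only after the first one has exited
# with status 0, so the container cannot start while mover is still
# working -- and never starts at all if mover fails.
run_mover && start_emby
```

Because the shell waits for the left-hand command to exit before evaluating `&&`, there is no race: the right-hand side simply never fires if the first command is interrupted or returns non-zero.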
  4. I'm having the same problem with an Emby docker. It could happen with more, but this is the only one I'm running. It just happened yesterday, and it's happening again now. Multiple cores are at 100%, and the container can't be stopped. I even tried /etc/rc.d/rc.docker stop to stop the entire Docker service, and that doesn't work either. I'm on 6.9.1, but this is not new; it has been happening since at least 6.8.3. I think I'm gonna have to set up a CentOS server and run Emby in a VM until this gets addressed. The fact that this thread seems to be dead does not look promising, but here I go sending it back to the top with a new post. We'll see. ***UPDATE*** I forgot that I had opted to use the Binhex version of the Emby container. Just for giggles I replaced it with the official version from the folks at Emby. Not only is it running without incident, it's faster. Imagine that. At one point I thought I knew better.
  5. ***ENHANCEMENT REQUEST*** It sure would be great if there were two options: one to suspend all scripts, and one to suspend individual scripts, all while retaining any custom scheduling.
  6. Yes, you would just change the setting "Use cache pool (for new files/directories):" in the share back to "Prefer" and run mover. Do this with VMs down.
  7. I just took out a ConnectX-2 and put in a ConnectX-3 to match my other machines, before the upgrade. No problems whatsoever on 6.9, and I'm about to upgrade to 6.9.1 tonight.
  8. Hello, I thought the custom schedule is 5 numbers separated by spaces, no? So if I want to run something daily at 3:03AM I thought it should be 3 3 * * *. What am I doing wrong? Thanks in advance!
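For reference, the five cron fields are minute, hour, day-of-month, month, and day-of-week, so `3 3 * * *` does mean 3:03AM daily. A quick shell sketch of how the fields split out (this parsing is just illustrative, not how cron itself reads the line):

```shell
#!/bin/bash
set -f                    # keep the "*" fields from glob-expanding
entry="3 3 * * *"
set -- $entry             # $1=minute $2=hour $3=dom $4=month $5=dow
printf 'runs daily at %02d:%02d\n' "$2" "$1"
```

If this schedule isn't firing, the problem is usually elsewhere (the schedule dropdown not set to "Custom", or the scheduler not picking up the change) rather than the expression itself.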
  9. Yes, I read something in this thread about some weirdness with the SSD pool, so I moved it all to the array. If you look at the cache settings, you have to set a share to "Yes" if you want mover to move its files from the cache to the array. If it's set to "No", mover skips the share entirely, so anything already on the cache drives stays there.
  10. That's one of the primary reasons I switched to Hubitat. Not only do they let you back up the config completely, it runs automatically by default. Off-topic I know, but I have to plug that thing. After years of constantly addressing SmartThings issues, it's nice to just USE smart home devices. Sorry, @limetech
  11. I've been using one of these since I started back in 4/2017. Nothing special about it other than it's the shortest I could find with nothing to snag.
  12. When 6.9 first came out I couldn't even load this forum. Using it today it was mostly fine, but I still had some intermittent issues loading threads. I've done a speed test right after and no issues on my end. So you may want to consider moving the server if others have seen this as well. I have a Lightsail instance that sits in Ohio and so far no issues whatsoever.
  13. I upgraded from 6.8.3 with no issues. However, before I went ahead with the upgrade I read this thread, so just for giggles I did the following first:
    1. Disabled auto-start on all dockers
    2. Disabled VMs entirely
    3. Set the Domains and Appdata shares to Cache "Yes" and ran mover to clear my SSDs, just in case an issue came up. They're XFS.
    4. Backed up the flash drive
    5. Rebooted
    6. Ran the upgrade
    7. Rebooted
    8. Let it run 20 minutes while I checked the dash, array, and NIC for any issues
    9. Re-enabled Docker auto-starts and VMs without starting them
    10. Rebooted
    ...and I'm good as gold. In fact the whole house uses an Emby Docker, and the array is so fast I think I might leave it there.
  14. I did see that one, but it's nowhere near what I'm looking for. I'm not interested in any sort of learning curve, as the data I'm looking to extract is relatively basic. It would be like using Wireshark to determine whether your broadband connection is up or down.
  15. It's of course easy to take a SWAG at pins and memory, but I was wondering if there's an Unraid-specific performance tuning app out there I might have missed. I just migrated to a 9900K with 64GB of memory, and I want to change pinning on some Dockers as well as maybe tweak some VM settings. Is there anything out there that monitors Docker and VM CPU and memory usage over a period of time? Even better would be an app that actually tells you what to change, but manual is fine. I just need the data. Thanks in advance!
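One low-tech way to gather that data, assuming nothing Unraid-specific: log `docker stats` on a schedule and average it afterwards. The logging command and the awk summary below are my own sketch, not an Unraid feature; sample data stands in here for real `docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}'` output, and the container names are made up:

```shell
#!/bin/bash
# Average CPU% per container from periodically logged samples.
# In practice you'd append lines like these from a cron/User Scripts job:
#   docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' >> usage.log
samples='Emby-01 12.5%
Emby-01 7.5%
SomeOtherApp 1.0%'

echo "$samples" | awk '
  { gsub(/%/, "", $2); sum[$1] += $2; n[$1]++ }   # strip "%", accumulate
  END { for (c in sum) printf "%s avg %.1f%%\n", c, sum[c] / n[c] }'
```

Run the logger every few minutes for a day or two and the averages (plus the raw peaks in the log) give you a reasonable basis for pinning decisions, even without a dedicated tuning app.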