NAStyBox

Everything posted by NAStyBox

  1. Every time I shut down, one or two drives turn red. Near as I can tell this also affects some LSI cards, and is purely hardware. So it's going bye-bye. I went through a few threads and the recommendations for cards were either high-dollar or much older hardware. I'd like to find something in the middle somewhere, for under $200. What's good for nine 10TB+ spinners and four SSDs? I'd like to go with a 16-port, but I'll do an 8-port with an expander if the combo is cheap enough. I have an available x16 slot because I'm using onboard video, but I may want to throw a card in someday, so if possible a 16-port is preferred. The SAS card that's in there sits in an x8 slot, and I have only that x16 and an x1 available. THANKS IN ADVANCE!!!
  2. I just noticed this section of the forum, but I have to say for the most part I just don't get it. Sure, on your desk you want something that looks clean and neat. But for a server? Fans run the quietest in the basement, and the best fans keep your drive and CPU temps low. The best case is the one that facilitates cooling the best. From my perspective anyway. That's why my Unraid box sits in one of these: https://www.amazon.com/Rosewill-Rackmount-Computer-Pre-Installed-RSV-R4000/dp/B0055EV30W/ref=asc_df_B0055EV30W/ It ain't sexy, but those drive cages give me fans in front of each stack of 4, and then there are 3 more 120s behind them pulling air through. As a result I don't have heat issues, and knock wood I haven't had a drive failure with 8 spinners and 4 SSDs going on 4 years now. Nope, like a wife of 20-plus years the case doesn't look good in a bikini, it's not much on fashion, makes some noise regularly, and on the inside it's absolutely frigid. Just what I need. 🤠
  3. Since running mover with a Docker container up can cause anything up to hangs that force a hard reset, I want to check my work before putting this in. Every night at 2 AM I shut down Unraid using a script with the powerdown command. About 15 minutes after that, the Z-Wave outlet it's plugged into cuts power. In the morning power is restored, and thanks to a BIOS setting the Unraid box comes back up. This lets me shave power usage by 25% every day, since it's a 6-hour outage while we're all supposed to be sleeping. I do have a Z-Wave button on the 2nd floor that lets me easily turn it back on without having to traverse the basement should it be a late TV night; so far I haven't had to use it. Anyway, I thought the morning would be a great time to run mover. So I'm going to have Unraid come up at 7:30 AM and run mover 5 minutes later. What I'd like to do is have a Docker container start after mover ends. This is what I have, please confirm: mover && docker start Emby-01 That just feels entirely too simple, so again, please confirm.
     ******Update****** Never mind. I got impatient and tested it. Works beautifully. I just wasn't sure how mover worked, and wanted to be sure the subsequent command waited until it was completely finished. So the server goes down at 2 AM, and the power is cut 15 minutes later to ensure a clean shutdown. Around 7:30 AM the power is restored, and per the BIOS setting the server turns on. 7 minutes later the mover script executes, and when that's done the Dockers and VMs start (sketch below). Presto: 5.5 hours off the electric bill, no more crazy box-hanging conflicts with mover, and Docker gets bounced once a day, so no more, or at least fewer, Docker hangs. No weird issues with mover spinning up drives either, since it runs right after a startup.
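     In case it helps anyone, here's roughly what the morning script boils down to. A minimal sketch, assuming the User Scripts plugin fires it a few minutes after boot; Emby-01 is my container, and the VM name is just a placeholder for whatever you run.
        # Morning job: wait for mover to finish, then bring services back up
        mover && docker start Emby-01
        virsh start "MediaVM"    # placeholder VM name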
  4. I'm having the same problem with an Emby docker. It could happen with more, but this is the only one I'm running. It just happened yesterday, and it's happening again now. Multiple cores are at 100%, and it can't be stopped. I even tried /etc/rc.d/rc.docker stop to stop the entire Docker service, and that doesn't work either. I'm on 6.9.1 but this is not new; it has been happening since at least 6.8.3. I think I'm gonna have to set up a CentOS server and run Emby in a VM until this gets addressed. The fact that this thread seems to be dead does not look promising, but here I go sending it back to the top with a new post. We'll see.
     ****************UPDATE**************** I forgot that I had opted to use the Binhex version of the Emby container. Just for giggles I replaced it with the official version from the folks at Emby. Not only is it running without incident, it's faster. Imagine that. At one point I thought I knew better.
  5. ***ENHANCEMENT REQUEST*** It sure would be great if there were two options: one to suspend all scripts, and another to suspend individual scripts. All while retaining any custom scheduling.
  6. Yes, you would just change the setting "Use cache pool (for new files/directories):" in the share back to "Prefer" and run mover. Do this with VMs down.
  7. I just took out a ConnectX-2 and put in a ConnectX-3 to match my other machines, before the upgrade. No problems whatsoever on 6.9, and I'm about to upgrade to 6.9.1 tonight.
  8. Hello, I thought the custom schedule is 5 fields separated by spaces, no? So if I want to run something daily at 3:03 AM I thought it should be 3 3 * * *. What am I doing wrong? Thanks in advance!
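     For reference, this is the field order I'm assuming (standard cron, which seems to be what the custom schedule expects):
        # minute  hour  day-of-month  month  day-of-week
          3       3     *             *      *            # 03:03 every day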
  9. Yes I read something in this thread about some weirdness with the SSD pool, so I moved it all to the array. If you look at the cache settings you have to set a share to "Yes" if you want Mover to move it to the array. If you set it to "No" it leaves everything on the cache drives.
  10. That's one of the primary reasons I switched to Hubitat. They not only allow you to back up the config completely, but the backup runs automatically by default. Off-topic I know, but I have to plug that thing. After years of constantly addressing SmartThings issues it's nice to just USE smart home devices. Sorry, @limetech
  11. I've been using one of these since I started back in 4/2017. Nothing special about it other than it's the shortest I could find with nothing to snag. https://www.amazon.com/Verbatim-16GB-Store-Flash-Drive/dp/B00RORBNSK/ref=sr_1_2?dchild=1&keywords=verbatim+store+n+go+16gb+nano&qid=1614880234&sr=8-2
  12. When 6.9 first came out I couldn't even load this forum. Using it today it was mostly fine, but I still had some intermittent issues loading threads. I've done a speed test right after and no issues on my end. So you may want to consider moving the server if others have seen this as well. I have a Lightsail instance that sits in Ohio and so far no issues whatsoever.
  13. I upgraded from 6.8.3 with no issues. However before I went ahead with the upgrade I read this thread. So just for giggles I did the following before upgrading.
     1. Disabled auto-start on all dockers
     2. Disabled VMs entirely
     3. Set Domains and Appdata shares to Cache "Yes", and ran mover to clear my SSDs just in case an issue came up. They're XFS.
     4. Backed up flash drive
     5. Rebooted
     6. Ran upgrade
     7. Rebooted
     8. Let it run 20 minutes while I checked the dash, array, and NIC for any issues.
     9. Reenabled Docker autostarts and VMs without starting them
     10. Rebooted
     ...and I'm good as gold. In fact the whole house uses an Emby Docker and the array is so fast I think I might leave it there.
  14. I did see that one, but it's nowhere near what I'm looking for. I'm not interested in any sort of a learning curve, as the data I'm looking to extract is relatively basic. It would be like using Wireshark to determine if your broadband connection is up or down.
  15. It's of course easy to take a SWAG at pins and memory, but I was wondering if there's an Unraid-specific performance tuning app out there I might have missed? I just migrated to a 9900K with 64GB of memory, and I want to change pins on some Dockers as well as maybe tweak some VM settings. Is there anything out there that monitors Docker and VM CPU and memory usage over a period of time? Even better would be an app that actually tells you what to change, but manual is fine. I just need the data. Thanks in advance!
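     Absent a dedicated tuning app, the closest I've come to "just give me the data" is sampling docker stats on a schedule. A minimal sketch, assuming a flat log file is good enough; the log path is just an example:
        #!/bin/bash
        # Append a timestamped CPU/memory snapshot of every running container once a minute
        LOG=/mnt/user/appdata/docker-usage.log   # example path
        while true; do
          date '+%F %T' >> "$LOG"
          docker stats --no-stream --format '{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}' >> "$LOG"
          sleep 60
        done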
  16. Y'know, I'd really like to engage you folks on this, but I won't. All I'm going to do is point out what's obvious to anyone who has ever marketed software professionally.
     1. NO ONE has asked me why I was annoyed at the difficulty of determining a way to schedule a simple task in the product.
     2. NO ONE has asked me for specific feedback regarding the "user scripts" plugin as a solution.
     3. NO ONE has asked me what could have made the experience a better one.
     Instead I get an empty-handed defense of the morbidly ineffective that for some odd reason at least one of you believes adds value here. ...and with that I'll leave you with my response when I see someone ask about Unraid: "It's a great core product for home use that I trust my data with. However, it's cumbersome to use beyond the simple act of building it and creating shares, with limited focus from its development team on polishing the product to overcome those issues. But in the end, because my data is safe, not completely secure but safe, I tolerate it. If I need something that does more than provide a stable array, I've found running a second product to accomplish those tasks is probably the better way to go." WOW, I've hoped for a long time that I could say something better. But instead I find myself testing Xpenology and taking a 5th look at FreeNAS, or whatever they're calling it this week. Sadly, for my purposes nothing has changed yet.
  17. Oh my god just stop. You people are hopeless.
  18. Broken, poorly thought-out alpha-ware, call it what you will. But when basic functionality has been removed from something like the OS this claims to run on, that functionality should be available elsewhere in a prominent location, not a "user scripts" plugin. ...and FYI, this is the kind of thing that makes people ABSOLUTELY DESPISE Linux. To watch people eff things up seemingly for the sake of creating a fruitless learning curve and then call it "ingenious" is infuriating. "Adverse effects on the system". LOL... Please, let's not get into why, if anyone ever tried to use this product in an enterprise environment, the number of hours before their termination could be tallied on fingers. As Unraid is targeted at home users, the level of frustration required to use it should decrease with every release, not increase. It does work well for home use. The problem is there are design flaws like this that make it as annoying as some enterprise products without any of the benefits, such as viable security and stability.
  19. Ahh, thank you. So the Reddit doc was wrong, and you all have broken the crontab functionality, because on every other flavor of Linux I've used, crontab -e writes to files, not memory. Things like that should become a feature of the Unraid GUI, and running crontab at the CLI should direct folks to that method. Just sayin'.
  20. I tried crontab -e, and the next day my entry was gone and had never executed. I read something on Reddit about creating a name.cron file on the boot drive in the plugins folder, and that worked about as well as my first try. Where should I be adding cron jobs in Unraid?
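     For reference, what I tried looks roughly like this. A sketch only; I'm assuming the dynamix plugin folder on the flash drive is the right spot, and that something like update_cron (or a reboot) is needed to actually load it, which may be the part I missed:
        # /boot/config/plugins/dynamix/myjob.cron  -- lives on flash, so it should survive reboots
        # min hour dom mon dow  command
        3 3 * * * /boot/config/myscript.sh > /dev/null 2>&1
        # then rebuild the crontab from the .cron fragments:
        update_cron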
  21. These work well for internal use.... https://www.ebay.com/itm/9-Pin-Motherboard-To-2-Ports-Usb-2-0-A-Female-Internal-Header-Adapter/233642166033?hash=item3666288711:g:I8kAAOSwcT1fA6PO
  22. I've had thumb drives come apart in my hand when pulling them out. Having had to clean up a lack of prep for a living, it's just one of those things I'm accustomed to doing. The effort is minimal but the payoff is huge if something like that should happen.
  23. Hah! I didn't even notice that was there. All this time I would go in and do the CP and pull it off a share. Thank You!
  24. Yes, but to get all the actual files you would need to go CLI at this point, as far as I know. I just create a directory on a share, cp /boot to that directory, zip it, and copy it off to a PC (roughly the sketch below). I mean, it's not an obstacle we can't overcome. It just makes things more user-friendly, the lack of which is what hurts Unraid in the market. You should see the objections I read before people go out and buy one of those glorified toasters, or opt for some other distro like Xpenology. Just last week I read one guy say that Unraid is no longer supported, lol. Think about the psychology behind it. Single-step preparation to migrate to new hardware can alleviate a lot of stress. None of the "what did I forget?" stuff. I mean I'm fine with things as is for the most part, but I'd like to see the product live on and grow, because I have no plans of going anywhere.
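     For anyone who wants the CLI version spelled out, this is roughly it; the share name and paths are just examples:
        mkdir -p /mnt/user/backups/flash                              # example share/path
        cp -r /boot/* /mnt/user/backups/flash/                        # everything on the flash drive
        cd /mnt/user/backups && zip -r flash-$(date +%F).zip flash    # zip it, then copy it off to a PC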
  25. As I'm preparing to swap out my mobo, CPU, and SAS card, I thought "Hey, why is this a manual process?" By preparing I mean printing PDFs of the drive slots, making a copy of the files on the flash drive, etc. So I thought "Hey, wouldn't it be great if this were a one-click process?" What I'd love is to download a zip file created by the system that includes ALL of the config info, a copy of the contents of the thumb drive, and anything else that may aid either myself or support in recovering from any issues experienced in a hardware swap. Maybe an option to automatically disable auto-start on all Dockers and VMs as part of the process too. Even cooler would be a means of entering your proposed new hardware to be checked against a database of known issues, but that's a whole 'nother project.
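     Until something like that exists, this is roughly the prep I'd script by hand before a swap. A sketch only; the destination path is an example, and it only grabs what I know I'll want, a full flash copy plus a drive-to-serial map:
        OUT=/mnt/user/backups/swap-prep-$(date +%F)          # example destination
        mkdir -p "$OUT"
        cp -r /boot "$OUT/flash"                             # flash contents (config, key, super.dat)
        lsblk -o NAME,SIZE,MODEL,SERIAL > "$OUT/drives.txt"  # which physical drive is which
        ls -l /dev/disk/by-id >> "$OUT/drives.txt"           # stable IDs in case /dev letters shuffle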