
Everything posted by JonathanM

  1. Sure. It's dirt simple. The commands for containers are just

        docker start <container name>
        docker stop <container name>
        docker pause <container name>
        docker unpause <container name>

     To start a VM:

        virsh start <VM Name>

     Here's a little script that waits on an IP to return a ping before continuing:

        #!/bin/bash
        printf "%s" "waiting for 192.168.X.X ..."
        while ! ping -c 1 -n -w 1 192.168.X.X &> /dev/null
        do
            printf "%c" "."
        done
        printf "\n%s\n" "192.168.X.X is online"
        virsh start X
        docker start X

     You can insert timed pauses with sleep <seconds>
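     For instance, a minimal sketch of how sleep could space out the starts (the VM and container names here are placeholders, not anything from an actual setup):

        #!/bin/bash
        # start the VM first, then give it a fixed head start
        virsh start MyVM
        sleep 60
        # start the containers that depend on it, with a short gap between them
        docker start container1
        sleep 15
        docker start container2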
  2. Well then. Yep, I'd either remove the offending shield or not use a card with components that can touch the bare metal. See my comments in that thread; to sum it up, SOMEBODY messed up. Either the standards organization did a poor job of defining the card or the slot, or one of the manufacturers is not following the standard by putting metal or components that close together. Three different entities, any one of which could be at fault. End result: don't use those components together without modification.
  3. Where did you find those installs? You need to be using Community Applications (Apps Tab) to install any add-ons to Unraid.
  4. Stopping the containers doesn't stop the underlying docker service, and as long as the service is active the image will be mounted. Shouldn't stop the array from shutting down though.
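     To illustrate the difference (a sketch; the rc.docker path reflects how Unraid's Slackware-style service scripts are typically laid out, so treat that path as an assumption):

        # stops every running container, but the docker service stays up and docker.img stays mounted
        docker stop $(docker ps -q)

        # stops the docker service itself, which is what actually unmounts the image (assumed Unraid path)
        /etc/rc.d/rc.docker stop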
  5. I'm not talking about the previous apps portion. That would be handy, but like you say is not going to necessarily be what is intended. I'm talking about the point in time when CA Backup does its thing and backs up your appdata. That way, if you lose your cache pool and are restoring from ground zero, it gets restored with the backed up appdata set, which theoretically would match the containers you want to reinstall and get running, at least to a large extent.

     Also, it would be handy if installing a container wouldn't immediately start it, but instead offered an option like VMs do, to set it up but not start it. When I had to recreate my cache pool for real, the multi install put things back and started them in an order which created some errors and chaos, because of the way I have things set up with dependencies and scripted starts. Having everything restored but not started would be ideal.

     I script some containers' startup with logic that can't be handled with the built in binary auto start with delay. I need conditionals, some of which were handled by your deprecated auto start plugin, some of which I coded myself. Since auto start and delay are now part of the base, I just handle all the special case containers with scripts.
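     As a rough sketch of the kind of conditional start I mean (the container names and checks are made up for illustration, not my actual setup):

        #!/bin/bash
        # only start the app container once its database container is running and its appdata exists
        if [ -d /mnt/user/appdata/mydb ] && docker ps --format '{{.Names}}' | grep -qx mydb; then
            docker start myapp
        else
            echo "mydb not ready, skipping myapp" | logger -t container-start
        fi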
  6. I was troubleshooting a container, blowing it away totally in the image and appdata and re-adding it from previous apps. I expected adding a fresh container to show up either at the top or bottom of the container list with auto start and delay not set. Instead it showed up in the previous order, set to auto start with the delay. Also, the autostart list has some containers which I haven't had installed for a very long time, and I'm fairly certain I've messed with auto start order and timing since then, but the containers still have templates so they show up in previous apps.

     Non existing where, exactly? If it's checked against currently installed containers, that doesn't seem to be working as intended. If it's checked against saved templates, then a previously installed container that hasn't had its template removed will stay in the list, which is what I seem to be seeing.
  7. Well, that certainly explains the behaviour I've encountered. Apparently that list is never pruned, there are some containers in there that I haven't had installed for a long time. Lesson here is to turn off autostart and save state before removing a container. @Squid, could you possibly back up and restore that file with CA backup / restore?
  8. Depends on whether the sector count reduction is hardware or software. If it's a different model like some drives that were only sold as USB externals, then you will need a new drive. If it's a HPA issue, then you need to figure out why the HPA was set so you can be sure it won't happen again, and remove the HPA. Google unraid hpa if you have no clue what I'm talking about.
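     If it does turn out to be an HPA, hdparm can show and clear it (a sketch; sdX is a placeholder for the affected drive, and don't change anything until you know what set the HPA in the first place):

        # report visible vs. native max sectors; "HPA is enabled" means the drive is clipped
        hdparm -N /dev/sdX

        # permanently restore the full native sector count (replace the placeholder with the drive's real native max)
        hdparm -N p<native max sectors> /dev/sdX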
  9. Where those settings are stored is not something I have researched, but in my experience if you simply remove a container and reinstall it using CA previous apps, it will indeed come back in the same sorted order with the delay. However... like I said, I don't know where it's getting that info, so it's possible that won't always happen depending on what all gets blown away before you recreate things. That's a question for @Squid
  10. So all your hard drives are connected to a single lead from the PSU? That's not good.
  11. Doesn't mean it's not an issue. QC for computer stuff is pretty much ship it and let the end user find the dud. Also, are all drives plugged directly into the PSU cables, or do you have a backplane or power splitters involved?
  12. And the response is, since people are already not happy with the perceived slowness of fixes and new features in the core product, adding a whole new layer of problems and complexities for limetech to deal with will slow releases down even more, causing even more complaining about things not being kept up to date. If limetech was responsible for the nvidia build as well as the core product, you would see less timely progress, not more. Limetech is absorbing more and more of the community pioneered features as time goes on, at some point they may very well decide to start doing nvidia drivers if they feel it's a good use of their limited resources. That time is not soon™.
  13. https://forums.unraid.net/topic/57181-docker-faq/?tab=comments#comment-566086
  14. Do you have some extenuating circumstance where you need to keep parity valid? Like Johnnie said, the rebuild parity method is much faster.
  15. There's your answer. The armor security suite is actively checking your network for vulnerabilities. Disable the armor and the log entries will stop. I tried to tell you that earlier in this thread.
  16. At your leisure be sure to do a non-correcting check to ensure all is good. The last check on file should ALWAYS be a non-correcting check showing zero errors. If that situation changes, you need to investigate why and correct the issue until you get there. A zero error parity check is your assurance that if a disk fails, it will be emulated and 100% rebuildable.
  17. With the array stopped, change the number of cache slots to 1. That will enable XFS for the cache drive.
  18. Cheaper to run a wire from one PSU to the other, connecting the green power signal lines. That way both PSU's will come up and shut down at the same time.
  19. I would still do a correcting parity check, just to make sure your interim steps didn't get things out of sync. I'd expect a (large) handful of parity corrections, and a subsequent non-correcting check should come up with zero errors. If it doesn't, collect diagnostics and attach them here for analysis.
  20. At the time of this post, LTS is 5.6.42, so no, LTS is several versions older. LTS is definitely the recommended version, unless you enjoy being a beta tester. Unifi has a pretty consistent history of breaking things with new releases, so stick with what works, especially if this is a production setting. If it's your testing lab, go right ahead with latest, but be ready for havoc with any given update.
  21. You certainly can use XFS for a single member cache. It's only when you have multiple cache device slots defined that you are forced to use BTRFS, because at the moment XFS doesn't support multi-device volumes. The major downside to BTRFS is that it seems to be more brittle or fragile than XFS. What I mean by that is a lack of tolerance for hardware or software errors, and the recovery options for broken BTRFS volumes aren't as robust as the tools available for XFS, so having a comprehensive backup strategy is, as always, a high priority. On the other hand, BTRFS has more options and features than XFS, so if you are running server grade hardware with robust power conditioning it can be worth the trade-off.
  22. The problem with trying to extend coverage with a single super duper WiFi AP is that the radio communication is two way. Antenna gain at the AP end can only get you so much sensitivity; you also have to deal with the radio and antenna of the client. Sometimes it's just way more effective to add an AP to gain coverage. The beauty of the Unifi AP setup is that you get single point management for all your APs that just works, as long as you stay on the LTS branch of the controller software.