Everything posted by JonathanM

  1. I doubt that you will see a speed decrease; in fact, I suspect things will work much faster with the memory running in sync with the processor. Pushing speeds past spec causes multiple retries when data is corrupted but correctable, and crashes when the corruption is uncorrectable, both of which should be eliminated when things are running in spec.
  2. They can, but the fact that you are asking means that you shouldn't, at least until you understand the consequences of sharing the same engine for multiple apps. The short and sweet of it is, if you mess up the database for one app and don't have the technical knowledge to manually fix it, you will be stuck losing ALL your databases to recover the one rogue app. Containers share resources: disk space, CPU, and RAM. Whether you spin up two db containers or one, the server usage is the same; each additional copy of the container only consumes the space and other resources used by the additional data of the apps using the database, just like a single container would. The only downside to multiple containers is keeping track of the port numbers, but there are tens of thousands of port numbers at your disposal. If you want to educate yourself on database management, on the surface it's not that bad, but it's a deep field of knowledge. It's much easier to use docker containers the way they were meant to be used: as individual building blocks that can be discarded and replaced as needed without affecting other containers.
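     For illustration, one database engine per app is just two ordinary container commands on different host ports. This is a hypothetical sketch, not a recommendation; the container names, ports, passwords, and appdata paths are all placeholders.

        # Two independent MariaDB instances, one per app, on different host ports.
        # All names, ports, and credentials here are made-up placeholders.
        docker run -d --name mariadb-appA \
          -e MYSQL_ROOT_PASSWORD=changeme1 \
          -p 3306:3306 \
          -v /mnt/user/appdata/mariadb-appA:/var/lib/mysql \
          mariadb

        docker run -d --name mariadb-appB \
          -e MYSQL_ROOT_PASSWORD=changeme2 \
          -p 3307:3306 \
          -v /mnt/user/appdata/mariadb-appB:/var/lib/mysql \
          mariadb

     If one app corrupts its database, you throw away that one container and its appdata, and the other app never notices.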
  3. Be VERY careful when implementing something like this script. I would add a kill feature of some sort, where the script looks for the existence of an arbitrary file on the flash drive before starting the VM, that way you could disable the auto start for troubleshooting. Blindly force starting a VM on a schedule is a bad idea for multiple reasons.
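     A minimal sketch of that kill-file idea, assuming the flash drive is mounted at /boot and the VM is managed by libvirt; the kill-file path and VM name are placeholders.

        #!/bin/bash
        # Scheduled VM start with a kill-file override.
        # Create the kill file on the flash drive to disable auto start.
        KILLFILE=/boot/config/vm_no_autostart   # placeholder path
        VMNAME="ExampleVM"                      # placeholder VM name

        if [ -f "$KILLFILE" ]; then
          echo "Kill file present, skipping auto start of $VMNAME"
          exit 0
        fi

        # Only issue a start if the VM isn't already running.
        if ! virsh domstate "$VMNAME" 2>/dev/null | grep -q running; then
          virsh start "$VMNAME"
        fi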
  4. If you get to the point of two XFS and one ReiserFS, then the obvious step is to upgrade one of the XFS drives, copy the contents of the ReiserFS drive to the new free space, then format the ReiserFS drive as XFS. I know that will leave you with an older drive in the array and a spare 4TB, but you could always hang on to the 4TB as a backup, and the next 8TB can replace the oldest drive still in the array.
  5. As long as you are willing to lose anything that was written to that drive slot after the upgrade process started, it should be pretty straightforward to get back to the old state. If NOTHING was written to the array after the rebuild started, then parity should still be valid except for a few sectors. Doing a New Config, being extra careful to make sure all the drives are in the correct slot assignments, and selecting "parity is already valid" should get you back. You WILL need to do a correcting parity check to get back in sync, but things should be relatively close, so the errors should be few.
  6. Let me guess, you "chose" to upgrade the ReiserFS drive?🤣 I know, hindsight and all that. When was your last parity check with zero errors?
  7. Actually, you can still convert your 4TB drives using the free space the 8TB gave you, but the 8TB will have to wait, UNLESS you have enough space to move things around and free up the 8TB. It's a pity, though, because ReiserFS is really bad at handling 8TB drives; the wait times for new files to get space allocated can be horrible. However... now that I think about it, what I personally would do in your situation is this:
     1. Finish the rebuild on the 8TB, verify everything is there and working properly, and get a non-correcting parity check with zero errors.
     2. Attach the old 4TB using the USB dock, and mount it using Unassigned Devices.
     3. Assuming the old 4TB mounts properly, do a binary file compare against the freshly rebuilt 8TB (one way to do that is sketched below). If that completes with no errors, format the 8TB to XFS and copy the 4TB content back over.
     4. Copy the content of one of the array 4TB drives to the remaining space on the 8TB, then format the 4TB you just copied. Copy the remaining 4TB to the freshly formatted XFS 4TB, then format the last 4TB.
     This is going to take days, maybe a week or more if you insist on moving files instead of copying. If the removed 4TB won't mount in the USB dock with Unassigned Devices, then I'd format it using destructive mode in UD and use it as the temporary location to hold the content of the 8TB while you format it. All of this assumes good backups, and a SOLID understanding of what each step is for and how to accomplish it. DON'T dive in blind and assume; ask questions until you have a plan with specific details. What I've outlined is just that, an outline.
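     For the binary compare in step 3, a checksum-based dry run with rsync is one option. This is just a sketch; the mount points are examples, so substitute the actual paths of the old 4TB (under Unassigned Devices) and the rebuilt 8TB.

        # Compare the old drive to the rebuilt one byte for byte (checksums),
        # without copying anything (-n = dry run). Any itemized output means
        # a file differs or is missing on the rebuilt drive.
        rsync -rcn --itemize-changes /mnt/disks/old4tb/ /mnt/disk1/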
  8. Since this is posted on the Unraid forums, I'm going to guess he's running a VM inside of Unraid, which uses the KVM hypervisor. Which version of Unraid are you having issues with?
  9. Unfortunately, I don't think Unraid is at the point of being safe to use for people with no desire to learn some IT skills.
  10. Re: WiFi 6e

     It's not so much a rule as a support nightmare. If a popular adapter emerges with consistent Linux support, with the manufacturer submitting working drivers for the current kernel, then yes, it's quite possible for that specific adapter to be supported. The current issues with Realtek wired adapters are bad enough; trying to support wifi in general just isn't going to happen, AFAIK. It's only just recently that some specific USB wired adapters could be used.
  11. This is the critical part of the question. IF your motherboard has multiple controllers that can be passed through separately, so that your boot USB is on a controller that is NOT passed, or if you pass through a separate USB controller PCIe card, then yes, you can attach hubs to that controller to increase the number of ports available on it. From the way your question is worded, I'm not sure if you are confusing ports with controllers. Typically each controller runs multiple USB ports; many motherboards have only a single controller that runs all the ports on the board, and you can't pass through the same controller that owns the port with the USB boot stick.
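     One quick way to check what you actually have, using standard Linux tools from the console (device names will differ per system):

        # Each line here is a separate USB controller (a distinct PCI device).
        lspci | grep -i usb

        # Tree view of buses and attached devices; find the Unraid boot stick
        # here to see which controller owns it before passing anything through.
        lsusb -t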
  12. Depends on how you approach it. I've never been denied an RMA, regardless of reason. Performance issues, regardless of SMART status, are a valid reason to RMA.
  13. Install NoMachine in the VM and on the client computer, and connect that way instead of with the built-in VNC. You should really only be using the Unraid VNC for tasks that require the local console in the VM; normal daily-driver access should be done using remote access hosted in the VM.
  14. Probably got dropped on the floor at some point between manufacturing and now. If possible, you need to return it for a refund and get a new drive; the manufacturer's warranty will likely replace it with a refurb.
  15. No others with results as catastrophic, that I am aware of. Docker container template customizations are on the flash as well, so keeping current backups after you add new containers is good. Pretty much any time you muck around in the Unraid GUI and make configuration changes (network, containers, users, share permissions, etc.), you need to make a new backup. It's just that those types of changes don't tend to irreversibly erase drive data if you accidentally use an older version.
  16. That method works, but it's easier to click on the flash in the main GUI page and select the flash backup button.
  17. After you accomplish that task, IMMEDIATELY take a new flash drive backup, and destroy or mark for destruction any previous flash drive backups. The reason being: if, in the fog of war, you have a failure and use an older flash backup to get up and running, it is very possible to make the array think that data drive should still be in the parity position and overwrite it, irrecoverably erasing anything that was on the disk.
  18. Once the parity build is done, I'd run an extended smart test on disk 3. I wouldn't be happy with a brand new drive kicking out errors like that. It's possible that drive could have been the cause of your initial issues.
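     If you'd rather kick that off from the command line than the GUI, the standard smartctl invocation looks like this; /dev/sdX is a placeholder for disk 3's actual device.

        # Start an extended (long) SMART self-test; it runs in the background
        # on the drive itself and can take many hours on large disks.
        smartctl -t long /dev/sdX

        # Check progress and the final result afterwards.
        smartctl -a /dev/sdX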
  19. What is the error on disk 3?