Everything posted by JonathanM

  1. Especially since all the other competitors in this custom NAS space support it!
  2. Download the latest version of memtest directly from https://www.memtest86.com and create a boot stick with only memtest.
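Not from the original post, but for anyone building the stick from a Linux box, here is a minimal sketch of the raw-image write. It assumes the download extracts to memtest86-usb.img and the stick enumerates as /dev/sdX; both are placeholders, so confirm the real names with lsblk first, and note the write destroys whatever is on the stick.

```python
# Hedged sketch: raw-write the MemTest86 USB image to a stick (run as root).
# IMAGE and DEVICE are assumptions -- substitute the real names, and triple
# check DEVICE, because everything on it will be overwritten.
import os
import shutil

IMAGE = "memtest86-usb.img"  # path inside the extracted download (assumed)
DEVICE = "/dev/sdX"          # your USB stick, per lsblk (assumed)

with open(IMAGE, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks
    dst.flush()
    os.fsync(dst.fileno())  # make sure it all hits the stick before removal
```

On Windows the vendor's download includes its own imaging tool, so this only applies if you're preparing the stick from Linux.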
  3. My personal preference is to set it up as another pool, and divide the workload in whatever way makes the most sense in your specific scenario. Whatever setup you finally choose, be sure to keep a backup routine going; just because you have a redundant device doesn't mean backups aren't needed. Appdata backup to an array disk is a good start.
  4. Does that include only having one at a time plugged in? Having 2 physical ethernet connections requires very specific settings both in Unraid and on the managed switch end. Much easier to just use one port.
  5. Formatting a disk erases it. No way around that. IF you have enough free space on other drives in the array, yes, you can move data off of one of the disks, emptying it so formatting doesn't lose data; it's empty, after all. This statement is still VERY true.
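For concreteness, a rough Python sketch of that drain step (nothing official; plugins like unBALANCE do this properly). The paths are assumptions, and on Unraid you should work disk-to-disk, never mixing /mnt/user with /mnt/diskX paths in the same copy.

```python
# Hedged sketch: copy everything from disk1 onto disk2 so disk1 can be
# safely formatted. Assumes disk2 has enough free space; verify the copy
# before deleting anything from disk1.
import shutil

SOURCE = "/mnt/disk1"  # the disk you want to empty (assumed)
TARGET = "/mnt/disk2"  # a disk with enough free space (assumed)

# Merging into the same top-level share folders is the point: user shares
# span disks, so the files stay in the same shares, just on another disk.
shutil.copytree(SOURCE, TARGET, dirs_exist_ok=True)
```

Only after verifying the copy and removing the originals is the source disk really safe to format.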
  6. Depends on how much you care whether your parity and target data drive are always spun up instead of just the target drive.
  7. It may be too old to transcode the files you are working with. Do you have any h.264 files to try?
  8. Unclear on what you are asking about the CPU. Unraid uses standard Linux KVM virtualization. CPU passthrough exposes features of the host CPU and limits usage to the assigned cores; you can also limit the host's ability to access cores to keep them exclusively for use by VMs. The VM's motherboard is always emulated, but you can pass through select PCIe or USB devices, which excludes them from use by the host. Unraid is a single payment up front, with no ongoing payments for the OS license. Licenses purchased 10 years ago are still valid for current releases.
  9. No more important than any other file system on Unraid. Unraid runs the entire OS from RAM. ECC theoretically freezes the system if it can't correct an error, limiting damage to files in use at that point. Regular RAM can keep running, silently corrupting data in the background. It doesn't much matter which filesystem. If you are ok running Unraid with standard RAM, don't stress over ZFS. 6.12.x added ZFS, either for single-member parity array disks or for pools, which can have multiple members.
  10. Each pool is a single entity with regard to the file system, so hardlinks don't know or care about the individual disks in a pool. RAID0 with 2 members will give you double the space of the smallest member. Single profile will add the 2 members together. Either will cause a total loss of data on the pool if either member fails. New versions of Unraid allow ZFS as well as BTRFS for multi-member pools. ZFS may be more reliable than BTRFS; I haven't had very good luck with BTRFS. Any file system change to the pool requires backing up any content, as the format will erase all data. Changes in BTRFS profiles may be possible without reformatting, but backups are still recommended.
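To put numbers on the profile difference in #10, a tiny sketch; the helper function is made up purely for illustration.

```python
# Illustrates the capacity rules described above for a two-member pool.
def usable_space_tb(members_tb, profile):
    """Rough usable data capacity, in TB, for the given BTRFS profile."""
    if profile == "raid0":
        # Striped: each member contributes at most the smallest member's size.
        return min(members_tb) * len(members_tb)
    if profile == "single":
        # Concatenated: member sizes simply add together.
        return sum(members_tb)
    raise ValueError(f"unknown profile: {profile}")

# A hypothetical 4TB + 8TB pool:
print(usable_space_tb([4, 8], "raid0"))   # 8  -> double the smallest member
print(usable_space_tb([4, 8], "single"))  # 12 -> the two members added
# Either way, one failed member takes out the whole pool.
```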
  11. Not saying it's not dead, that's a real possibility, but before you make the call be sure it's not a port or configuration thing. There are multiple USB stick testing programs designed to weed out poor quality and fake capacity drives; it could be useful to fire up a couple of test programs and see what they say. If the drives fail on a different system, return them.
  12. 10,000 ft view of what you have presented. None of the IDE drives are worth fooling with, given the amount of power to keep them spinning, plus the need for a handful of obsolete controllers... The spinning rust SATA stuff is similar; although you could easily procure a controller with enough ports, it won't come cheap, and once again, all those hungry spindle motors... The SSD crop is a little more usable, albeit still a little port hoggy for the amount of total storage. There has been very little usable speed increase perceptible with the last 5 years or so of CPU changes. Most of the gains are in power usage per unit of processing done. Yes, the newer stuff benchmarks much better, but hardly anyone gets their jollies watching benchmarks, and the real world gains are hard to see for most tasks. Transcoding media is a big exception. Intel 6th gen is plenty for a general use rig, if a little more power hungry than the new stuff per unit of work done. The way I see it, you need to make a decision: either put together a case / power supply / SATA controller ports (motherboards can have plenty, depending on your definition of plenty) to support a small menagerie of old drives, or pony up a few hundred bucks for a pair of decent sized modern drives for mass storage and use some of the SSDs as working fast storage that will fit in just about any case. A pair of refurb 16TB drives can be had for around $350 or less. Your total crop of drives comes to what looks like about 4TB; I didn't actually add them up. Theoretically you should be able to change your handle to something more to your liking in your profile preferences. I haven't tested it, but the powers that be said it should be possible.
  13. Before you buy, please ensure that it can do true IT mode. I'm not up on all the latest part numbers, but I'd be more inclined to think a 9300 would be IT only, whereas a 9340 may come in IR mode, and may or may not be changeable to IT.
  14. If you can't figure it out, here is the reset procedure. https://docs.unraid.net/unraid-os/manual/troubleshooting/#lost-root-password
  15. Only root can log in to the GUI or SSH; perhaps you are remembering one of the share users and passwords instead of root?
  16. That is no longer correct with new versions of Unraid. I believe it should be /dev/md3p1.
  17. Assuming the drive was in another physical slot that didn't supply 3.3V to those pins until after you had already assigned it to the logical slot, then it should work fine if you mask off those pins with Kapton tape. However... your description makes it sound like that's not the problem. The drive wouldn't suddenly stop spinning up because of the 3.3V issue if it hadn't been moved, and I'm unclear why you would assign the drive to the logical slot and then move it physically to another connection.
  18. While it is definitely possible to back up the vdisks, it requires shutting down the VM fully to get an accurate backup, which is not ideal. Much better to use a backup utility inside the VM to back up to a location on the array, just like you would a standard hardware based PC. I use UrBackup; it's been a lifesaver. Appdata has its own backup application in the app store; it works well with some attention and tuning on first deployment. The docker image should NOT need to be backed up; the whole point is that the appdata folders contain all the customization and content that isn't written to the array shares. The only exception currently is custom networks, so as long as you keep notes on any custom networks created and redo them before restoring your applications from Previous Apps in the app store, the docker image rebuilds itself in a matter of minutes. If you accidentally have a container writing settings and data INSIDE the docker image, you need to fix that, as it will create other issues besides restoring data in the event you need to recreate the docker image file. The system folder contains the VM definitions as well, but the appdata backup app has provisions to save those.
  19. Since you seem to be able to trigger the issue, it should be fairly straightforward. Assuming multiple sticks of RAM: remove half, run the test, then remove the other half, replace with the previously removed sticks, and repeat the test. Passing doesn't mean the RAM is good; it means the test didn't trigger an error. Only a failed test is fully conclusive. Also, since the CPU is more directly involved with the RAM sticks on newer builds, loading up the CPU can uncover RAM issues. If it's not clear from my approach, I strongly suspect your issue is within the RAM's chain of custody of the data. Random(ish) corruption is almost always RAM related.
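The halving procedure in #19 boils down to a bisection. As pseudocode in Python form (fails() stands in for physically installing just that group of sticks and running a full memtest pass; nothing here is an actual Unraid tool):

```python
# Pseudocode for the stick-swapping procedure described above.
# fails(group) == True means memtest reported errors with only that
# group of sticks installed.
def isolate(sticks, fails):
    """Narrow a failing set of RAM sticks to the smallest failing group."""
    if len(sticks) == 1:
        return sticks  # down to one stick; it's the prime suspect
    if not fails(sticks):
        # A pass only means the test didn't trigger an error,
        # not that the RAM is good -- exactly as noted above.
        return sticks
    mid = len(sticks) // 2
    for half in (sticks[:mid], sticks[mid:]):
        if fails(half):
            return isolate(half, fails)
    # Neither half fails alone: the fault only shows with both halves
    # installed (slot, pairing, or load related), so keep the whole group.
    return sticks
```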