JonathanM

Moderators

Community Answers

  1. JonathanM's post in Docker start order and delays not working was marked as the answer   
    When the docker service starts, yes.
  2. JonathanM's post in Device is Missing (Disabled), Contents Emulated - But No Data was marked as the answer   
    I'm having a hard time thinking of a situation where an emulated drive should ever be formatted. If it's unmountable it needs a file system check. Normally an emulated drive will be indistinguishable from the drive that has been dropped because of a write error, as long as parity was completely in sync when the drive was dropped. All the normal things you can do to a physical drive can be done to the emulated drive, such as reading the existing data, writing new data, formatting it, scanning for filesystem errors, etc.
  3. JonathanM's post in How to enlarge a VM disk was marked as the answer   
    How I handle this sort of thing is a dedicated VM with all the good tools, like gparted and such. Temporarily assign the target vdisk as a second drive to your utility VM, load up the disk utilities and do what you need to do.
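    If the vdisk is a raw or qcow2 image file, the grow step itself can be done from the Unraid console with qemu-img before booting the utility VM. A sketch, assuming the VM is shut down first; the path and size are examples, not your actual values:

    ```shell
    # Grow the image file first (the VM must be shut down).
    # Path and size are examples -- adjust to your vdisk.
    qemu-img resize /mnt/user/domains/MyVM/vdisk1.img +20G
    ```

    After that, attach the vdisk as a second drive to the utility VM and use gparted to expand the partition and filesystem into the new space.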
  4. JonathanM's post in Windows 11 VNC not always refreshing was marked as the answer   
    Install nomachine in the VM and your desktop. Use that instead of VNC.
  5. JonathanM's post in Is my Docker usage size reasonable given my containers? was marked as the answer   
    It's not so much how many, but which ones specifically, and what paths are being written to by the apps inside the containers. The app paths aren't something that is viewable in Unraid, you must look at the app's internal configurations and verify that any path storing data that needs to be persistent is mapped to a spot on the array or appdata. The mappings are easily viewable using the Unraid GUI, but the internal app paths to compare them to aren't.
     
    All that said, your screenshot doesn't have anything that jumps out at me as bad, you just have some hefty containers. I'd bump up to 30GB and keep an eye on things, if the sizes stay relatively the same over several weeks you are good. Every time a container is updated, the newly installed parts are added to the image, then the newly unused parts are removed. It's the time between downloading the updates and cleaning things up that causes the warning when you are close to the size limit. Filling the image can be fatal though, so it's wise to keep a healthy margin of free space for the update process, and keep an eye on it to make sure the usage isn't creeping up over time.
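    If you want to see which containers are actually consuming space inside the image, the docker CLI can report per-container writable-layer usage from the Unraid console. A sketch; exact output columns vary by docker version:

    ```shell
    # SIZE column shows each container's writable layer -- data
    # written inside the container, i.e. inside docker.img.
    docker ps -s

    # Overall breakdown of images, containers, and volumes.
    docker system df
    ```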
  6. JonathanM's post in Strange CPU spikes. was marked as the answer   
    Try disabling cache_dirs
  7. JonathanM's post in Unraid Can't See Remote SMB Shares in Main Under Boot Devices was marked as the answer   
    Unassigned Devices plugin
  8. JonathanM's post in Unable to map ports on bridged containers was marked as the answer   
    Port mappings only apply if the container is bridged to the host's (Unraid) IP address. Since you assigned a specific IP to the container, all ports are open on that IP, so any changes need to be made in the container app config.
  9. JonathanM's post in PR2100 Migration was marked as the answer   
    Probably not, but that depends on if the disks have a standard partition layout and a file system that Unraid can read.
     
    In NO case I can imagine can they be put into the main array without losing data.
     
    If it is a healthy RAID1 and you don't mind breaking the mirror, you could always see if one of the disks will mount using the Unassigned Devices plugin. Worst case, you would need to put the disk back and allow the WD NAS to rebuild it. If either of the drives isn't perfectly healthy, this could very well end up in data loss. I wouldn't test this unless you have a full backup of any data you can't afford to lose.
     
    If Unassigned devices can't mount it (probably not), I see no other option besides copying over the LAN.
     
    BTW, Unraid (or any RAID) is not a substitute for backup. It can only rebuild a failed drive, it can't uncorrupt data or fix deletion. I'd copy the data and leave the old NAS as a backup destination.
  10. JonathanM's post in Mirror existing cache drive or split contents was marked as the answer   
    My personal preference is to set it up as another pool, and divide the workload however makes the most sense in your specific scenario. Whatever setup you finally choose, be sure to keep a backup routine going, just because you have a redundant device doesn't mean backups aren't needed. Appdata backup to an array disk is a good start.
  11. JonathanM's post in How to rebuild/check parity information is correct for array drives after a power outage? was marked as the answer   
    Formatted is NOT blank, it has an empty filesystem with all the table of contents and file metadata structures ready to accept files.
     
    Parity doesn't have any concept of files, or any format for that matter. Moving files around on array disks doesn't invalidate parity as long as the array is started.
  12. JonathanM's post in Appropriate Use Case for UN-RAID? was marked as the answer   
    You can definitely use the drives you mentioned in Unraid. I was just pointing out that the cost of setting up the number of SATA ports, power supply connections, and an appropriate case for the 14 SATA drives you listed would probably exceed the cost of a pair of new drives; 16TB is currently the sweet spot, in my opinion, for $/GB of storage. You can probably put together something that will work for much less, but it's going to use much more electricity on an ongoing basis and have many more points of failure than new hardware.
     
    By all means, throw some used parts together and try the trial of Unraid. I'm not being sarcastic, it will probably do exactly what you want for now. I was just trying to prepare you for the inevitable feature creep.
     
    If you put the three 320GB spinner drives in the parity array and set up two pools, one with the 1TB SSD and another with the 250GB SSD, that would be a nice start; you should be able to find an old board with 6 SATA ports. That would give you an entry point to see how Unraid works.
  13. JonathanM's post in Mount UNRAID shared dir/folder in MacOS VM as a disk? was marked as the answer   
    Yes on both counts. Be aware that vdisks are sparse by default, so don't overprovision without being acutely aware of the space actually in use. It's too easy to create a 1TB vdisk image on a disk with less than 1TB of real space available; when the VM tries to write more than the underlying disk can hold, the vdisk can't grow and the VM crashes. Better to underallocate, then move the vdisk to a larger space and expand it as needed later.
     
    Backing up vdisks while the VM is running is tricky, as the file system as it appears to the VM may not be in a consistent state, and a restoration from that state may require a file system check and repair. If the VM is shut down, or the volume in question is unmounted inside the VM if it's not a boot or system-required volume, then it's no longer an issue. I'm not familiar enough with macOS to know whether it's possible to release a mounted volume while the OS is still running.
     
    Depending on which type of file system the vdisk.img file resides, it should be possible to use snapshots in the host as well.
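    To see how much of a sparse vdisk is actually allocated versus its nominal size, you can compare apparent and real usage from the console. A sketch; the path is an example:

    ```shell
    # Apparent size (what the VM sees) vs. blocks actually allocated.
    # Path is an example -- point it at your vdisk.
    du -h --apparent-size /mnt/user/domains/MyVM/vdisk1.img
    du -h /mnt/user/domains/MyVM/vdisk1.img

    # qemu-img also reports "virtual size" vs "disk size".
    qemu-img info /mnt/user/domains/MyVM/vdisk1.img
    ```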
  14. JonathanM's post in How can I fix this setup without spending a fortune? (external drives through USB) was marked as the answer   
    Apparently you can get a right angle converter to use a normal 8x PCIe card like a LSI 9300-8e, which could net you a couple SAS external ports good for 4 drives each or more if plugged into a multiplier backplane.
     
    Disclaimer, while it looks like it might work from what I saw in the manual, I have no clue if it would actually function properly. The manual shows a video card, who knows if they neutered the PCIe slot so it only works with video.
  15. JonathanM's post in Docker Image file is full was marked as the answer   
  16. JonathanM's post in Corrupted usb, no backup of usb other than 2 years ago, Added new disks afterwards 0 Parity disks was marked as the answer   
    Nope. Just be VERY careful not to populate either of the parity slots and you should be fine.
  17. JonathanM's post in Slow Network Speeds was marked as the answer   
    Does that include only having one at a time plugged in? Having 2 physical ethernet connections requires very specific settings both in Unraid and on the managed switch end. Much easier to just use one port.
  18. JonathanM's post in Understanding CPU modes was marked as the answer   
    Unclear on what you are asking about the CPU. Unraid uses the standard Linux KVM virtual machine stack. CPU passthrough exposes features of the host CPU and limits usage to the assigned cores; you can also limit the host's ability to access cores to keep them exclusively for use by VMs. The VM's motherboard is always emulated, but you can pass through select PCIe or USB devices, which excludes them from use by the host.
     
    Unraid is a single payment up front, no ongoing payments for the OS license. Licenses purchased 10 years ago are still valid for current releases.
  19. JonathanM's post in Hardlinks across a Multi Pool Devices was marked as the answer   
    Each pool is a single entity with regards to file system, so hardlinks don't know or care about the individual disks in a pool.
     
    RAID0 with 2 members will give you double the space of the smallest member. Single profile will add the 2 members together. Either will cause a total loss of data on the pool if either member fails.
     
    New versions of Unraid allow ZFS as well as BTRFS for multi member pools. ZFS may be more reliable than BTRFS, I haven't had very good luck with BTRFS.
     
    Any file system change to the pool requires backing up any content, as the format will erase all data. Changes between BTRFS profiles can sometimes be done without reformatting, but backups are still recommended.
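    As an illustration of changing a BTRFS profile in place, a balance with convert filters can migrate data and metadata between profiles without reformatting. A sketch; the mount point is an example, and a backup first is still strongly recommended:

    ```shell
    # Convert an existing pool's data and metadata to raid1.
    # Mount point is an example -- adjust to your pool.
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

    # Check the resulting profiles.
    btrfs filesystem df /mnt/cache
    ```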
  20. JonathanM's post in Unraid corrupting large files on any file operation. was marked as the answer   
    Since you seem to be able to trigger the issue, it should be fairly straightforward. Assuming multiple sticks of RAM, remove half, run test, remove other half and replace with prior removed sticks, repeat test.
     
    Passing doesn't mean the RAM is good, it means the test didn't trigger an error. Only a failed test is fully conclusive.
     
    Also, since the CPU is more directly involved with the RAM sticks on newer builds, loading up the CPU can uncover RAM issues.
     
    If it's not clear from my approach, I suspect your issue is somewhere within the RAM's chain of custody of the data. Random(ish) corruption is almost always RAM related.
  21. JonathanM's post in Readonly Public Shares was marked as the answer   
    Does that answer your question?
  22. JonathanM's post in Run script when specific docker starts was marked as the answer   
    Not that I can think of, but you can certainly start a container from a script. Disable the auto start for that container on the docker page and script the startup separately.
    schedule at array start
    sleep however long you want to wait
    docker start containername
    run the rest of your commands.
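    The steps above could be sketched as a single script scheduled at array start (the sleep duration and container name are placeholders):

    ```shell
    #!/bin/bash
    # Scheduled at array start; auto-start disabled for this container.
    sleep 300                    # wait however long you want
    docker start containername   # substitute your container's name
    # run the rest of your commands here
    ```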
  23. JonathanM's post in Out of disk space? was marked as the answer   
    You could...
     
    relax the affected share split levels and disk exclusions and verify your minimum free space settings
     
    manually move files from disk to disk to free up the space needed
     
    upgrade disk to larger model
     
    add disk
     
    You are going to keep fighting with this until you add more space or delete unwanted items to free up space. Converting h264 media to h265 can free up loads of space as well.
    I recommend reading through the help tips on the share page in the GUI; it might help you figure out specifically what the immediate issue is, as well as give you tools to manage it on an ongoing basis. You are running out of space, so you will need to actively manage your share allocations until you get more space.
  24. JonathanM's post in Unraid share doesn't accept correct password... was marked as the answer   
    To mount a share you must use a valid user, not root.
    To connect to the console, you must use root, not one of the users.
    Use that path as the rsync destination, assuming it mounted correctly and you see the contents of the Unraid share at that local path.
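    A minimal sketch of that sequence from the remote machine's console; the server name, share, user, and paths are placeholders:

    ```shell
    # Mount the Unraid share with a valid user (not root).
    mkdir -p /mnt/unraidshare
    mount -t cifs //tower/share /mnt/unraidshare -o user=youruser

    # Verify you can see the share's contents, then rsync to it.
    ls /mnt/unraidshare
    rsync -av /local/data/ /mnt/unraidshare/
    ```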
  25. JonathanM's post in Connect to a VM using UNRAID own VNC server/repeater/proxy/or-whatever-it-is... was marked as the answer   
    What address and port are you trying in guac? I know a plain VNC client works fine for me using the server IP and the normal 5900, 5901, 5902, etc., depending on what order the VMs were started. The VMS tab's graphics column shows the port.
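    If you'd rather confirm the port from the console than from the VMS tab, libvirt can report it directly. A sketch; the VM name is a placeholder:

    ```shell
    # Reports the VNC display, e.g. ":0" -> port 5900, ":1" -> 5901.
    virsh vncdisplay "Windows 11"
    ```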