The Transplant

Members
  • Posts: 70
  • Joined

  • Last visited

The Transplant's Achievements

Rookie (2/14) · Reputation: 3 · Community Answers: 1

  1. Thanks - I have ordered two of them and will report back.
  2. Opening up an old topic discussed here a couple of years ago - I am looking to expose two serial ports to a VM. I am moving from a dedicated Windows machine to a VM. The dedicated machine uses an EdgePort 416. I did try plugging that in and accessing it through the USB Manager, but with 16 serial ports and 4 USB ports it was too much to figure out - the USB Manager seemed to keep cycling, finding the ports and then losing them again. Since my needs are quite simple, I thought I would just purchase two USB serial converters and plug them in. Are there any recommendations that have worked for others? Thanks.
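     For anyone following along, outside of the USB Manager plugin this is roughly how I'd expect a single USB serial adapter to be handed to the VM with plain libvirt; the vendor/product IDs, temp file path, and VM name below are placeholders rather than values from my actual setup.

        # 1) Identify the adapter's vendor:product ID on the host
        lsusb | grep -iE 'serial|ftdi|prolific'

        # 2) Write a hostdev snippet for that adapter (IDs below are placeholders)
        printf '%s\n' \
          "<hostdev mode='subsystem' type='usb' managed='yes'>" \
          "  <source>" \
          "    <vendor id='0x0403'/>" \
          "    <product id='0x6001'/>" \
          "  </source>" \
          "</hostdev>" > /tmp/usb-serial.xml

        # 3) Attach it to the VM persistently (replace Windows10 with the real VM name)
        virsh attach-device Windows10 /tmp/usb-serial.xml --persistent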
  3. Well, I held my nose and went for it. My concern was that some dockers were clearly writing to the old cache even after updating all settings. In some cases this was because their config had hard-coded references to the cache location. In others, it seems that when both the old and new cache are available at the same time, existing files keep being written to the old cache while newly created files go to the new one? Anyway, I was able to move them all successfully. Now on to adding a mirror for the cache so I don't find myself in this situation again. One more question: when a docker has a hard-coded reference to the cache drive, for example /mnt/cache/appdata/openvpn-client, wouldn't it make sense to change this to /mnt/user/appdata/openvpn-client so that it would follow wherever the cache is located? Thanks.
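     As a follow-up note, here is a sketch of how one might list which containers still have bind mounts pointing at the old pool; the pool name cache_specific is from my setup, so adjust it as needed.

        # Print each container's bind-mount sources and flag any that still point at the old pool
        for c in $(docker ps -aq); do
          docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' "$c"
        done | grep '/mnt/cache_specific'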
  4. I would love to follow this advice - if I felt confident enough that I wouldn't hose the server in doing so. As an example, I see a Radarr folder in both the old and new cache, with different files and folders, and date/time stamps indicating that both are being updated right now. What do I do here? I'm getting close to deleting all of the dockers at this point, as it appears you need a very strong understanding of what is going on in order to fix this. But I'm willing to hang in to see if this can be done without having to reconfigure everything. Thanks. (Screenshots of the new cache and old cache folders were attached here.)
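     For reference, a sketch of the kind of comparison I had in mind between the two copies; the Radarr appdata paths are assumptions about my layout, so adjust to match.

        # Dry-run rsync shows what differs between the old and new copies without changing anything
        rsync -avn /mnt/cache_specific/appdata/radarr/ /mnt/cache/appdata/radarr/

        # Or compare the file lists directly
        diff <(cd /mnt/cache_specific/appdata/radarr && find . | sort) \
             <(cd /mnt/cache/appdata/radarr && find . | sort)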
  5. I have had relatively limited experience with hypervisors, but it seems like a simple backup and restore should not be this difficult. I am learning that Unraid has come a long way, yet it still requires a fairly deep understanding of what is going on behind the scenes to recover from any issues.
  6. OK, fixed the mover action on appdata. I did shut down VMs and dockers when running the mover before. But with this new setting change, should I try shutting them down again and running the mover, or am I still going to have an issue with file versions? Thanks.
  7. That makes sense. So how do I compare these files? Using Krusader I see the folders on cache_specific. But where is the file on the array that I should be comparing this to? I see disk1-5; one of them has an appdata folder that contains virtually nothing, and the others don't have one at all. So where am I supposed to compare against? Thanks.
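     Here is a sketch of the check I had in mind, in case the answer is simply "look on every disk"; the disk names assume a five-disk array and binhex-krusader is only an example folder.

        # See which array disks actually hold an appdata folder
        ls -ld /mnt/disk[1-5]/appdata 2>/dev/null

        # Search every disk for a specific docker's folder
        find /mnt/disk[1-5] -maxdepth 2 -iname 'binhex-krusader' 2>/dev/null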
  8. Thanks for responding. I have looked at the logs - all they seem to show is a lot of this. cache_specific is the drive that I am trying to empty. Is there something else in the logs I should be looking for, or does this help diagnose? Thanks.
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-output-none-panel.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/btsync-gui-disconnected.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/btsync-gui-disconnected.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-recorder-on.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-recorder-on.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/btsync-gui-paused.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/btsync-gui-paused.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-recorder-paused.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-recorder-paused.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/clementine-85-playing.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/clementine-85-playing.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-recorder.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-recorder.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/changes-allow.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/changes-allow.svg File exists
     Feb 28 08:25:20 Odin move: file: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-volume-high-panel.svg
     Feb 28 08:25:20 Odin move: move_object: /mnt/cache_specific/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-volume-high-panel.svg File exists
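     For completeness, a sketch of how I might confirm that these "File exists" messages are duplicates already present on the array; this assumes /mnt/user0 is the array-only view of the user shares, which I believe is the case on my version of Unraid.

        # Check whether one of the files the mover complains about already exists on the array
        ls -l /mnt/user0/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/24/status/audio-recorder.svg

        # Count how many files exist on both the old pool and the array copy of appdata
        comm -12 \
          <(cd /mnt/cache_specific/appdata && find . -type f | sort) \
          <(cd /mnt/user0/appdata && find . -type f | sort) | wc -l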
  9. Thanks - that did work. Definitely not an experience I would want to repeat. The reason I had to do this in the first place is that one of my two cache drives failed. I replaced it, but I am still in the process of moving data off the second cache drive onto the new replacement; the plan is then to mirror the drives to avoid issues like this. That is another daunting task - nothing seems to work as planned. If you feel inclined to take a look, here is the link to that thread. Thanks!
  10. I just added a new cache drive (1TB). My plan is to remove the second drive (240GB), replace it with another 1TB drive, and then mirror the pair. I added the new cache and have pointed all shares at it as needed. I have read dozens of posts about doing this, but every one seems slightly different - either on an older version of Unraid or not quite my configuration. So I will post my details and hope I can get specific guidance for my case, as I don't want to be rebuilding my box. I have stopped dockers and VMs and run the mover - twice. It completed. However, I am still seeing a bunch of data on the old cache drive. Domains on the old cache has a folder for an old VM with nothing in it. System contains libvirt.img and docker.img; presumably these need to be moved to the new cache, but I can't figure out how to do that. The appdata folder on the old cache still contains a lot of old material. Looking at my docker list, I see a few dockers that still reference the old cache. Is it as simple as stopping these, updating the location, and copying the folder from cache_specific to cache? And when I look at other docker folders on the old cache that have been updated recently, I see files being written to them as I watch. So clearly not everything has been moved. All of this leaves me with a distinct lack of confidence about pulling this drive right now, so I'm hoping for some pointers. Diagnostics attached. Thanks. odin-diagnostics-20240227-1707.zip
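     To make the question concrete, here is a sketch of what I imagine the manual copy of the service images would look like once Docker and the VM Manager are disabled in Settings; the system share layout below is the default one and the pool names are mine, so treat it as an assumption rather than a recipe.

        # With Docker and VM Manager stopped, copy the system share from the old pool to the new one
        rsync -avX /mnt/cache_specific/system/ /mnt/cache/system/

        # Verify the images arrived before removing anything from the old pool
        ls -lh /mnt/cache/system/docker/docker.img /mnt/cache/system/libvirt/libvirt.img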
  11. Thanks @ghost82 - I did manage to restore the older file and get it running. But I had made some changes and want to get the newer image restored, so I will work through your suggestions above. The backup script I am using just seems to make a copy of the .img, .xml, and .fd files. It took me a while to realize that it was prepending dates to these file names so that multiple backups can be kept. This feels like something that should be native in the GUI for both dockers and VMs. Since I had not been through a crash like this before, it took me down for a number of days while I learned how to restore. A simple "create new VM", click restore, and select the backup would have been nice, and from a software standpoint it seems remarkably simple to add. I appreciate the help!
  12. @ghost82 Following on this thread, I am now in the process of restoring a VM. I am running the backup plugin and have some questions before I do this. I notice in the backup directory that I have several copies, by date, of the img, xml, and fd files, but the newest img file does not have an xml or an fd file associated with it. Should I revert to the next-newest file set where all three exist, or can I use the xml and fd files from an older backup set? My fd file is not named ovmf_vars.fd but 20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd; I assume that doesn't matter? Looking at the xml I see a few corrections I might need to make: /mnt/user/domains/Outlook/vdisk1.img - the image is currently in a backups folder, so I will move it to the corresponding folder in domains and leave this as is. Should I do anything with the 20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd file that is currently in the backups folder? Thanks!
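     For my own notes, a sketch of how I expect the command-line side of the restore to look; the backup folder path, the backup xml file name, and the nvram destination are assumptions (the nvram path is where Unraid normally keeps OVMF vars), so only the general shape matters here.

        # Put the disk image back where the XML expects it
        mkdir -p /mnt/user/domains/Outlook
        cp /mnt/user/backups/Outlook/vdisk1.img /mnt/user/domains/Outlook/vdisk1.img

        # Restore the NVRAM file to the location referenced by the XML's <nvram> element
        cp /mnt/user/backups/Outlook/20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd \
           /etc/libvirt/qemu/nvram/5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd

        # Re-register the VM definition with libvirt
        virsh define /mnt/user/backups/Outlook/20240206_0300_Outlook.xml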
  13. So I found the problem. My cache SSD was failing and then did fail. I am sure there was some way for me to have seen this coming, but I didn't see any errors, and I didn't see anything connecting the slowness of the VM with an imminent failure of the cache drive.
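     In hindsight, a sketch of the check I should have been running periodically; the device name is a placeholder, and on Unraid the same SMART attributes are visible per disk from the Dashboard.

        # Show overall SMART health plus the wear/reallocation counters for the cache SSD
        smartctl -a /dev/sdX | grep -iE 'overall-health|reallocated|wear|pending|uncorrect'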
  14. Which is essentially what I was doing. But if I implement my approach, then I can (I think) prevent downtime from a cache drive failure. Not that my system is mission critical, but it did take out my Home Assistant, which is a little annoying.
  15. To be honest, I wasn't aware that my cache drives were a single point of failure. It seems obvious now, but I learn as I go. I had two 240GB SSDs: one dedicated to VMs (the one that failed) and one to the other stuff like appdata, system, etc. I had hoped to do something with a spare SSD I have, but it is too small to accommodate my VMs, so I ordered two 1TB SSDs that arrive today. Here is my plan (the array is running right now and I have backups of my VMs): 1) install the first 1TB in place of the failed drive; 2) restore the VMs to this drive; 3) move all data from the existing 240GB SSD to this drive; 4) take out the 240GB and replace it with the second 1TB; 5) mirror the second 1TB to the first 1TB. Is this a good approach, and will it remove my single point of failure? Thanks.
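     On step 5, my understanding is that adding the second 1TB device to the pool and letting Unraid balance it is what creates the mirror; under the hood I believe it amounts to something like the following btrfs commands (the mount point assumes the default pool name cache), though I would let the GUI drive it.

        # Convert the pool's data and metadata to raid1 across both devices
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

        # Confirm the raid1 profile afterwards
        btrfs filesystem df /mnt/cache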