Underscoreus

Community Answers

  1. Alright! It took me a while to get all the components, assemble them, and then pull the trigger, but I have now emerged on the other end with the transfer completed, and I would like to fulfill my promise of a post-mortem report on how the process went and the issues (and solutions) I encountered. First off, a huge thank you to JonathanM for his guide further up in the thread; it was invaluable. As for the "nuances and dragons" I encountered along my journey:

After I updated my Unraid version from 6.9.2 to 6.12.4, GPU passthrough to my VM stopped working. Judging by the forum, this seems to be a widespread problem with no real conclusive fix. I tried to get it working again for a little while, but since I wasn't going to use the GPU in my new setup, I let it go.

The motherboard in my new setup would not boot in legacy mode no matter what I tried (double-checking that the USB was the first and only entry in the boot order, disabling fast boot, disabling XHCI Hand-off, enabling Legacy USB Support, and making sure the USB was plugged into a USB 2.0 port on the motherboard). What ended up solving the issue was renaming the folder on the Unraid boot USB from EFI- to EFI to enable UEFI boot, and just like that it started working exactly as expected. Note that the USB was always visible in the BIOS, but whenever I tried booting from it I was just dropped back to the BIOS screen again. Also, the boot times of my new machine were very slow even with fast boot enabled, but disabling XHCI Hand-off seems to have massively improved them. I'm not sure what it does or why, but if you find your new machine slow to boot, disabling this could be worth a try.

After booting into Unraid, my new drives initially did not show up in the Main overview. This was because they were all plugged into a SATA expansion card and the firmware on the SATA controller was apparently old. I downloaded the new firmware and installation software from a third-party site and flashed the new firmware to the controller. After doing so, the expansion card worked a treat and the drives immediately showed up in my overview. To update the firmware on this card you will need another PC running Windows, as the firmware upgrade tool is Windows-only. Another requirement is that the motherboard's chipset can't be a 600-series chipset (at least at the time of writing); that is the chipset used by the latest 13000- and 14000-series processors from Intel. If the Windows machine you want to use for the firmware update has a lot of other PCIe cards plugged in, you might need to remove some of them. I had a USB 3.0 expansion card, an Ethernet adapter, and a GPU plugged into my machine, and the SATA controller card was not visible in the firmware-updating software, possibly because there weren't enough PCIe lanes on my CPU. To solve this I simply turned off the machine, unplugged the USB and Ethernet expansion cards, rebooted, and tried again, and just like that the card showed up and I was able to update the firmware. Thank god for this reply from da_stingo on the forum; without it I would never have thought to update the firmware and would have assumed my SATA expansion card was a lost cause. One last thing to note on the SATA expansion subject: apparently some expansion cards, depending on the chipset they use, don't play nice with Unraid/Linux. This is something I was completely unaware of, and it was only by luck that I ended up buying an expansion card with a compatible/recommended chipset. If you are planning on doing this, please have a look at this forum thread for recommendations on which chipsets to look for before buying.

A slight amendment was needed to the great guide by JonathanM mentioned above: I was unable to assign the cache pool as secondary storage and the array as primary storage for the shares hosted on my cache drive, so instead I simply kept the cache pool as primary storage, set the array as secondary storage, and reversed the transfer direction to go from cache to array, which yielded the same end result of transferring the files from my cache pool to my array.

Another issue I encountered while moving was that some data did not want to move, namely the docker and virtio folders in my system share, as well as all my empty shares. This turned out to be because I had the CA Mover plugin installed and running. Once I removed that plugin and retried the mover, it moved all the remaining files in my system share and recreated the empty folders/shares in my new ZFS pool without issue. Thanks to this post on the forum for giving me the solution to this specific problem.

While running mover I hit some files whose names were too long to be transferred from my disks to my ZFS pool, causing "filename too long" errors to appear in the log. It was fairly easy to fix by opening a terminal and renaming the offending files with the mv command. Hot tip (probably very common knowledge): if your file name or file path contains spaces, just wrap it in quotes, like this: "/mnt/disk1/this path has spaces/file.txt". Even after all that, I needed to manually move a few files that were persistently sticking to my cache drive, using either mv or rsync; see the sketch below. Most of these were temp files in a few different locations that I probably could have done without, but since there were only a few folders I figured I would transfer them as well.

And that was that! Overall a fairly smooth transition, all things considered. Hopefully this final answer can help someone in the future who is planning a similar transition, or anyone who runs into any of these individual problems. I'll mark this final post as the solution so it becomes more visible (not sure if I can mark multiple posts as solutions), but the true heroes in all of this are JonathanM for the amazing guide he made, JorgeB for general advice and the great list of compatible SATA controllers, and the other people in the other forum threads who actually had all the answers. A huge thank you and shout-out to them!
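For anyone curious, here is a minimal sketch of the kind of mv/rsync commands I mean; the paths and file names below are made-up examples, not my actual shares or files:

    # rename a file whose name was too long for mover; quote any path that contains spaces
    mv "/mnt/disk1/this path has spaces/a very very long example file name.txt" "/mnt/disk1/this path has spaces/shorter-name.txt"

    # manually move a stubborn folder from the cache pool to the ZFS pool,
    # keeping permissions and timestamps, and showing progress as it goes
    rsync -avh --progress /mnt/cache/example-temp/ /mnt/zfspool/example-temp/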
  2. Hello! I'm doing some moving around of my files and I've run into some small hurdles, and I wanted to ask to be sure. I'm trying to eventually move all the data on my array and on my cache drive into a ZFS pool I've set up. I got some advice from JonathanM over in this thread here, which has been really helpful so far. The small issue I'm currently facing is that when I try to configure the shares that currently have my cache drive as primary storage, I can't configure them as recommended here, with the array as primary, cache as secondary, and the mover action set to move from secondary (cache) to primary (array). The GUI only lets me change the primary storage from cache to array, but does not let me set the secondary to cache; it's not available in the dropdown menu. So my question about a workaround: would keeping my primary storage as the cache and secondary storage as the array, with the mover action set to move from primary (cache) to secondary (array), be the same as setting my primary storage to the array, my secondary storage to the cache, and the mover action to move from secondary (cache) to primary (array)? Logically, in my mind, this should yield exactly the same result, but I'm not sure if I'm missing something about what the primary and secondary storage settings actually do, whether it's just a preference for where data is supposed to live, or whether they also store some other data or set some other variables. Thanks for the read!
  3. It's probably not going to affect me, since I'm not planning on expanding my pool immediately after switching drives, but out of curiosity, what kind of issues are there with ZFS pool expansion? I couldn't find anything specific online other than not being able to expand a ZFS pool the way you do with a btrfs or xfs pool/array, and instead having to add another vdev with the same number of drives as the existing one, which is just how ZFS works (see the sketch below).
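From what I could find, expanding would mean adding a whole second vdev of the same shape, roughly like this; the pool and device names are just placeholders:

    # add a second 4-disk raidz1 vdev to an existing pool called "tank"
    zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # data is then striped across both vdevs; check the resulting layout
    zpool status tank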
  4. Alright, thanks for confirming. Hopefully it shouldn't affect me too much, since I'm going with the pools approach.
  5. So this is only an issue when ZFS is set up using array disks, and not if I set up ZFS using pools, is that correct? Also, is this bug still present in 6.12.4, or has it been fixed? I didn't see it mentioned in the known issues section for any of the 6.12 versions released so far.
  6. Thank you so much for the helpful overview! I'll probably be back, either in this thread or in a separate one, for any specific issues or questions that arise. Hopefully there won't be too many of them. I'm unsure whether I should mark this thread as solved now and just add an addendum post with any additional info I needed once I finish the transfer, or whether I should wait to mark the thread as solved until the process is finished.
  7. My docker system and containers are stored in my cache pool, but my VM files are not. Since the cache drive was fairly small, I chose not to store them there, as they would more or less fill it. In the new setup I might just have everything stored in the ZFS pool instead of keeping a few files in a separate pool, purely for simplicity.
  8. Alright, good to know! Ah, alright, I'll be sure to get either a spare drive or, as you suggested, just a spare USB stick to assign to the main array. I currently have a 250GB cache pool set up that I'm not really using, but I haven't pulled the trigger on all the components and set up the new ZFS pool yet, since I wanted to ask these questions first in case some of my hardware choices were unwise or there were major issues with actually going through with this operation on the software side.
  9. I used poor wording, let me try to clarify: I'm moving from a server using 4 array devices (HDDs) to a server using 4 SSDs in a pool (these 4 SSDs will be the 870 EVOs). So it was poor wording on my part to say "corresponding SSD in the new array"; I should have said "corresponding SSD in the new pool" or "corresponding SSD in the new setup". I was trying to ask whether I should just copy the contents of each hard drive to a corresponding SSD. I'm not sure whether each drive is even exposed individually in the file system when you configure a ZFS pool, or whether the pool just shows up as a single folder (see the sketch below); maybe that is where the confusion comes from.
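If I've understood it right, the individual disks only show up at the zpool level and the pool itself mounts as a single path, something like this; the pool name is a placeholder:

    zpool status tank    # lists the individual member disks of the pool
    ls /mnt/             # the pool appears as one mount point, e.g. /mnt/tank
    ls /mnt/tank/        # shares/datasets live under that single path, not per disk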
  10. I was planning on going with the Samsung 870 EVO 2TB or 4TB, since it uses Samsung's 3-bit MLC (TLC) flash, so hopefully it'll last a bit longer than QLC SSDs.
  11. Why am I going over to ZFS from XFS? As far as I understand it, the Unraid array using XFS does not support SSD TRIM, which would cause more wear on the drives over time, whereas if I use a ZFS pool I can enable trimming (see the sketch below).
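From what I've read, enabling trim on a ZFS pool would be something along these lines; the pool name is a placeholder:

    zpool set autotrim=on tank   # trim freed blocks automatically as they are released
    zpool trim tank              # or kick off a manual trim pass
    zpool get autotrim tank      # confirm the setting took effect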
  12. Hey everyone, I'm planning to migrate my entire Unraid server to a new box (new motherboard, CPU, RAM, etc.) as well as migrating the data onto a new set of SSDs, in the process going from a set of 4 array devices (1 parity, 3 "active") using xfs to a pool of 4 drives using zfs (RAIDZ, 1 parity, 3 "active"). I am looking to transfer all of my data (only around 5TB in my case) as well as all of my VMs and preferably all of my dockers too. I was hoping you could help me out by answering some questions about the process; the more I look into it and think about it, the less sure I am that I'm not out of my depth with this project.

Are there any plugins or even dockers to help with or automate this process? I know you can get rsync dockers and the like, but I was more wondering if there are tools or processes specifically designed to move entire Unraid servers onto new drives, which would update all the paths for my shares, VMs, dockers, etc. Or is this going to be a fairly manual process of copying all the files onto the new drives and then going through and changing the paths of everything to point to the same files on the new drives? If such software exists, will it be hindered by the fact that I am going from an array of drives using xfs to a pool of drives using zfs?

If the solution is to manually copy the files over, what would be the best way to do so? Clone the contents of each drive in the old array to a corresponding SSD in the new setup? Just copy the entire contents of the "user" folder to the new pool? Are the files even really stored in the user folder, or is that just symlinks to what is actually stored on each disk? Is a specific tool better suited to the scenario where I have to move all my files manually? Would there be much of a difference between setting up and using something like rsync versus loading up a Krusader docker and doing the copy via the GUI there? (A rough rsync sketch of what I have in mind is below.)

When transferring to the new hardware I'll be unassigning all the CPU pinning and removing all references to hardware that won't be carried over from my VM configs, but is there anything else I need to be sure to do before pulling the plug? Presumably things like static IPs won't matter, since they are assigned to a specific Ethernet port on the motherboard that won't be carried over, so I'll just need to configure the port on the new motherboard with a static IP instead?

Hardware-wise, neither of my motherboards has 8 SATA ports, so to have all 8 drives plugged in for the initial file transfer I'm thinking of getting a PCIe SATA expansion card; are there any known issues with Unraid not working with these? I'm thinking of getting this card from StarTech.

If you could help me answer any of these questions I'd be really grateful! I'm not 100% sure of all the proper lingo, so if anything is unclear please leave a question and I'll try to amend the post with an update. Thanks for the read
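To illustrate the manual option I'm asking about, I imagine it would look something like this; the disk and pool paths are placeholders, not a finished plan:

    # dry run first to preview what would be copied from one of the old data disks
    rsync -avhn /mnt/disk1/ /mnt/newpool/

    # then the real copy for each disk, preserving permissions and timestamps
    rsync -avh --progress /mnt/disk1/ /mnt/newpool/
    rsync -avh --progress /mnt/disk2/ /mnt/newpool/
    rsync -avh --progress /mnt/disk3/ /mnt/newpool/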
  13. Slight bump. Anyone heard of this happening before?
  14. Hey guys (and gals)! For the last few months I have been having issues connecting to my Unraid shares from my two Ubuntu 18.04 workstations. I've looked through a lot of articles and forum threads on how to mount SMB shares on Linux through the fstab file, but none of them seemed to do the trick. The only way I managed to make it work was to set the "vers" option to 1.0 in the mount command, meaning that, presumably, I am currently using the SMB 1.0 protocol to connect my machines to my shares. If I am not mistaken, the most recent version used in Windows 10 is 3.0?

This fix works for most day-to-day activities, but I am discovering oddities with the setup, like some programs that create temp files not deleting them when saving to the server (while deleting them fine when saving locally), as well as some programs having erratic and unpredictable bugs when saving or loading things from the server. I don't know whether this comes from the fact that I am connecting with SMB 1.0, but it's the only lead I have, so I was wondering if anyone here knows how I could go about fixing this: whether there is something I need to configure on the workstation side, some packages or repositories I need to install to make later versions work, or some configuration needed on the Unraid side to make it more compatible with Ubuntu. (My current fstab attempt is roughly sketched below.)

I have a Windows 10 laptop and it is able to connect to the share no problem, but I have no idea what version of the SMB protocol it is using. I seem to remember reading that Unraid uses 2.0 and Windows transitioned to 3.0, and that this made mounting shares on Windows harder than before, or something along those lines. Any tips or advice are greatly appreciated! Sorry if this post leans too heavily towards being Linux-related instead of directly Unraid-related.

Running Unraid 6.7.2, Unraid OS Plus (if more hardware-specific details are required, I'll provide them). Thanks for the read, and stay safe out there!
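For context, my current fstab entry looks roughly like this; the server name, share, and credentials file are placeholders, and the vers=1.0 part is what I'd like to get rid of, presumably after installing cifs-utils and using 2.1 or 3.0 instead:

    # /etc/fstab on the Ubuntu 18.04 workstation (needs the cifs-utils package)
    //tower/myshare  /mnt/myshare  cifs  credentials=/home/me/.smbcredentials,vers=1.0,uid=1000,gid=1000  0  0

    # what I'd like to end up with instead:
    # //tower/myshare  /mnt/myshare  cifs  credentials=/home/me/.smbcredentials,vers=3.0,uid=1000,gid=1000  0  0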
  15. Fantastic, it works! Thanks for the help and the explanation! (Oops, forgot to press post on this yesterday, sorry.)