oliver

Members • 38 posts

  1. I don't use Unraid for Time Machine, so I couldn't say.
  2. All I have to say is wow. I made these changes on macOS Sonoma 14.4.1 / Unraid 6.12.10 and the Mac experience is far better. I have a large array and directory listings are almost instant.
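     For reference, a minimal sketch of the kind of macOS-side SMB tuning usually discussed in these threads, not necessarily the exact changes referred to above; the keys and values here are assumptions and should be checked against Apple's nsmb.conf man page before use:

         # /etc/nsmb.conf (create the file if it does not exist)
         [default]
         # skip SMB packet signing; commonly suggested for trusted LAN-only use
         signing_required=no

         # separately, stop Finder from littering network shares with .DS_Store files:
         defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE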
  3. If I have multiple shares, some of them native Windows shares, do you see any conflicts between the Unraid-specific ones and those, specifically from the changes to the Mac's nsmb.conf file?
  4. Every time this runs, it fills my log up with these:

         Apr 6 02:00:02 nas Docker Auto Update: Checking for available updates
         Apr 6 02:00:21 nas Docker Auto Update: Stopping heimdall
         Apr 6 02:00:29 nas kernel: docker0: port 8(veth383ba24) entered disabled state
         Apr 6 02:00:29 nas kernel: veth203f1ae: renamed from eth0
         Apr 6 02:00:30 nas kernel: docker0: port 8(veth383ba24) entered disabled state
         Apr 6 02:00:30 nas kernel: device veth383ba24 left promiscuous mode
         Apr 6 02:00:30 nas kernel: docker0: port 8(veth383ba24) entered disabled state
         Apr 6 02:00:30 nas Docker Auto Update: Stopping wikijs
         Apr 6 02:00:36 nas kernel: docker0: port 1(vethc2a901a) entered disabled state
         Apr 6 02:00:36 nas kernel: veth3b21e64: renamed from eth0
         Apr 6 02:00:37 nas kernel: docker0: port 1(vethc2a901a) entered disabled state
         Apr 6 02:00:37 nas kernel: device vethc2a901a left promiscuous mode
         Apr 6 02:00:37 nas kernel: docker0: port 1(vethc2a901a) entered disabled state
         Apr 6 02:00:37 nas Docker Auto Update: Installing Updates for hdd_scrutiny heimdall wikijs homeassistant
         Apr 6 02:05:52 nas Docker Auto Update: Restarting heimdall
         Apr 6 02:05:53 nas kernel: docker0: port 1(vethdf9918b) entered blocking state
         Apr 6 02:05:53 nas kernel: docker0: port 1(vethdf9918b) entered disabled state
         Apr 6 02:05:53 nas kernel: device vethdf9918b entered promiscuous mode
         Apr 6 02:05:57 nas kernel: eth0: renamed from veth7dd08c1
         Apr 6 02:05:57 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethdf9918b: link becomes ready
         Apr 6 02:05:57 nas kernel: docker0: port 1(vethdf9918b) entered blocking state
         Apr 6 02:05:57 nas kernel: docker0: port 1(vethdf9918b) entered forwarding state
         Apr 6 02:05:58 nas Docker Auto Update: Restarting wikijs
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered blocking state
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered disabled state
         Apr 6 02:05:58 nas kernel: device veth36d8670 entered promiscuous mode
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered blocking state
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered forwarding state
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered disabled state
         Apr 6 02:06:05 nas kernel: eth0: renamed from veth818c2f7
         Apr 6 02:06:05 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth36d8670: link becomes ready
         Apr 6 02:06:05 nas kernel: docker0: port 8(veth36d8670) entered blocking state
         Apr 6 02:06:05 nas kernel: docker0: port 8(veth36d8670) entered forwarding state
         Apr 6 02:06:08 nas Docker Auto Update: Community Applications Docker Autoupdate finished

     I assume this is normal since the containers are restarting? Is there a way to suppress these log messages?
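     One possible way to quiet the veth/docker0 kernel chatter is an rsyslog drop rule; the filter syntax below is standard rsyslog, but the drop-in path is an assumption, so check /etc/rsyslog.conf on your Unraid release for where (or whether) extra config files are included:

         # hypothetical drop-in, e.g. /etc/rsyslog.d/01-veth-noise.conf
         # discard the bridge/veth state-change messages logged on every container restart
         :msg, contains, "entered blocking state" stop
         :msg, contains, "entered disabled state" stop
         :msg, contains, "entered forwarding state" stop
         :msg, contains, "promiscuous mode" stop
         # restart rsyslog afterwards for the rules to take effect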
  5. Running this command:

         find /mnt/user -type d -name ".AppleDouble" -exec rm -rf {} \;

     I get tons of errors. EDIT: doh, it's because I had it in my veto config.
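     For what it's worth, a variant of that command that avoids one common source of "No such file or directory" noise, namely find trying to descend into a directory it has already deleted; same paths as above, nothing Unraid-specific assumed:

         # -prune stops find from recursing into each matched directory after it is removed;
         # "+" batches the deletions instead of forking rm once per match
         find /mnt/user -type d -name ".AppleDouble" -prune -exec rm -rf {} +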
  6. I'm looking at the second option (the Clear Drive Then Remove Drive method) in this guide: https://docs.unraid.net/legacy/FAQ/shrink-array/#the-clear-drive-then-remove-drive-method I've already used unBALANCE to empty the drive. My question is about this note: does it mean that if I format the drive, I can skip the lengthy clearing step and proceed straight to step 9? The note says the parity drive is 'updated accordingly', but I'm not sure what that means in this context. Is it an instant operation when formatting, as opposed to clearing? Or, since the drive currently has 0 usage, can I just remove it, reconfigure the array, and have parity remain valid? Or does an empty (but formatted) disk not mean the same thing as all zeros, so parity must be rebuilt?
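     As a side note, a quick way to sanity-check whether a disk really reads back as all zeros (which is what keeps parity valid when it is removed); /dev/sdX is a placeholder for the drive in question, and this reads the entire device, so expect it to take hours:

         # compares the raw device against an endless stream of zeros:
         # if the disk is fully zeroed, cmp only reports hitting EOF on /dev/sdX;
         # otherwise it prints the offset of the first non-zero byte
         cmp /dev/sdX /dev/zero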
  7. OK, so I set up two trial arrays and did a small test of the transfer process with some spare drives. Reconfiguration seems to work quite well and creates the new shares. My last question is: how does Unraid handle a disk from another Unraid system that has the same shares and conflicting files? In my case, the only conflict is the 'system' share, which I'm not familiar with (it looks like Docker/VM data). Will it cause problems if I merge the two arrays, which would combine the two different 'system' shares?
  8. Questions on how exactly Unraid works with existing drives and how the New Config option works. Let's say I pull a drive from array 1 and insert it into array 2. There are two things I could do. Option one: before starting array 2, I add the disk via the dropdown menu and click Start Array. My understanding is that Unraid will not simply accept the disk as-is; it will format it, resulting in the loss of the data on that disk. Is this correct? If so, the proper procedure is option two: create a New Config, preserve the existing assignments, and rebuild parity. That will not result in data loss from the new disk? And how does Unraid reconcile different share names? If the original disk from array 1 has a different root-level share, will the New Config just add a new share with the matching name?
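     To illustrate the share question with a hypothetical layout (disk numbers and share names made up): Unraid builds user shares as the union of the top-level folders found on every data disk, so a same-named folder on the imported disk simply merges into the existing share, and a new folder name shows up as a new share.

         ls /mnt/disk1      # disk that was already in this server
         media/  system/
         ls /mnt/disk3      # disk brought over from the other server
         movies/  system/
         ls /mnt/user       # user shares = union of top-level folders across all data disks
         media/  movies/  system/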
  9. The drives in the 170TB array are not to be included in the final array; sorry, I should have emphasized that better. The new array will contain a smaller number of larger-capacity drives, so I have to copy all of that data over and then retire array 1 entirely.
  10. Asking for feedback on merging two large Unraid arrays into one.

      Array #1 - 176TB, made of 8TB drives, 2 parity, 1TB SSD cache. No Docker, no VMs. This is primarily an archival array: data is written to it once and rarely ever used.

      Array #2 - 36TB, made of 18TB drives, 2 parity, cache pool of 1 SSD and 1 NVMe, mirrored. Runs Docker containers and VMs. This is my main array and the one I want to keep as the primary new array. I am going to add a bunch of 18TB drives and move the contents of #1 over to a new share on this array. The Docker containers on this array are used quite a bit, so I would prefer to minimize downtime. The massive volume of data on array 1 is the main factor here.

      Approach 1 - Migrate directly from array 1 to array 2:
      - Add the new drives to array 2
      - Clear them
      - Move the data from array #1
      What do I do with parity? Leave my array with no redundancy? Or should I leave parity on, since I'd have to do a rebuild at the end anyway? Is it faster to turn off parity entirely and then rebuild it, rather than leave it on the whole time? A con of this approach is degraded performance, but I don't really run read/write-intensive operations on array #2; as long as Plex still functions, I think I should be OK?

      Approach 2 - Set up a new (temporary) array:
      - Set up a brand-new array
      - Move the data from array #1 to these disks
      - Import these disks into array #2
      - Rebuild parity
      The downside is that I need new hardware for the temporary array, but I think I can manage. Also, how would I "import" a disk from one array to another? Do I just stop the array and it will let me add the disk? Note that both array 1 and 2 use encrypted XFS, but they have the same password. Also, presumably the share names need to be identical between the temporary array and the main array for the folder structures to line up? Can I use a trial version of Unraid for this?

      Other questions: What's the best way to actually move the massive amount of data from array #1? I'm not in a huge hurry; I know it will take time, weeks possibly. Doing it disk by disk is not practical since there are 22 drives or so, and I would also prefer to have parity in place since I'm doing full reads on a lot of older drives. That leaves network transfer as the only viable option. Software-wise, how do I handle this? Just load up Krusader via Docker and send 170TB over SMB? How gracefully would that handle errors? Is there a better option? I have multiple NICs; to avoid bottlenecking traffic, should I dedicate a private network between the machines? I have a 2.5GbE network.

      Other considerations: Both arrays are XFS-encrypted but have the same password. I would like to use the license from the #1 array, so I need to swap it with the license on array #2 when it's all done. Is this doable?
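      On the "is there a better option than SMB" question, one common approach for a transfer this size is rsync over SSH between the two servers, run inside a screen/tmux session so it survives disconnects and can simply be re-run after any interruption. A minimal sketch; the hostname (tower1) and share name (archive) are placeholders, not values from this thread:

          # Run on the destination server; re-running only copies what is still missing.
          # -a preserves permissions/timestamps, -h prints human-readable sizes,
          # --partial keeps partially transferred files so huge files can resume.
          rsync -avh --progress --partial root@tower1:/mnt/user/archive/ /mnt/user/archive/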
  11. Let's say I have two separate Unraid instances. Each server has encrypted disks, but both servers use the same password. Can I move a disk from one to the other, reset the array configuration, and have it pick up the new disk without having to reformat?
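      One way to sanity-check this before committing, assuming the disks are ordinary LUKS volumes (which is what Unraid's encrypted XFS uses); sdX1 is a placeholder for the moved disk's data partition:

          # on the destination server, confirm the LUKS header accepts the passphrase
          # without mounting anything; exit status 0 means the password unlocks it
          cryptsetup open --test-passphrase /dev/sdX1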
  12. I just got this on 6.12.6. Thanks, this did the trick. It's kind of sad that after all this time, Unraid still cannot reliably shut an array down.
  13. I have a large array: 22 drives totaling 164TB, with 2 parity and a 1TB cache. I'm using an HBA card with 4 ports as well as a maxed-out SAS expander. Software-wise the array is used for archiving, with very little activity: I write data to it once and it's barely ever read back. No VMs or Docker containers. I've been contemplating swapping out my aging mini-ITX motherboard for an ODROID-H3+: https://www.hardkernel.com/shop/odroid-h3-plus/ It's tiny, low-power, and has 2.5Gb Ethernet. Now, how do I get my drives onto it? It has an M.2 slot, so the HBA would go on an M.2-to-PCIe adapter like this: https://ameridroid.com/products/m-2-to-pcie-adapter-straight?variant=32064105578530&currency=USD&utm_medium=product_sync&utm_source=google&utm_content=sag_organic&utm_campaign=sag_organic&srsltid=AfmBOoqFL-28eJ3G9GuP385RA_gXZgmc9IJ0DHV_x8g7WRm4sNpJEMcR3wE The SAS expander doesn't need to be connected to the motherboard; it only needs power plus a connection to the SAS card. The main question I'm left with is whether this setup will work with so many drives, and how much of a performance hit we're talking about. Given that I'm only reading or writing one drive at a time, and at a very low rate, do you think this is doable? Probably the only problem I can think of is the parity check, which will likely take a lot longer, but I can live with that.
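      A rough back-of-envelope for the bandwidth question; every figure below is an assumption (the M.2 slot's PCIe generation and lane count, per-drive throughput, and HBA overhead all vary), so treat it only as an order-of-magnitude sketch:

          22 drives x ~180 MB/s sustained (parity check)   ~ 4,000 MB/s aggregate
          usable bandwidth of a PCIe 3.0 x4 M.2 link       ~ 3,500 MB/s
          single-drive write (normal archival use)         ~ 150-250 MB/s

      Everyday one-drive-at-a-time writes sit far below the link limit, while a full parity check would likely be capped by the M.2 link (and possibly by the HBA/expander SAS lanes) and stretch out accordingly.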
  14. I don't see this button; has it been removed since the post was made?