oliver

Members
  • Posts: 38
Everything posted by oliver

  1. I don't use Unraid for Time Machine, so I couldn't say.
  2. All I have to say is wow... I made these changes on Sonoma 14.4.1 / Unraid 6.12.10 and the Mac experience is far better. I have a large array, and directory listings are now almost instant.
  3. If I have multiple shares, some served from native Windows machines, do you see any conflicts between the Unraid-specific changes and those? Specifically from the changes to the Mac's nsmb.conf file?
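     Worth noting: /etc/nsmb.conf lives on the Mac client, so anything placed under [default] applies to every SMB server the Mac talks to, Windows servers included. One way to limit the scope (a sketch; the hostname and the tunable shown are illustrative assumptions, not the exact settings from the guide) is a per-server section:

         # /etc/nsmb.conf on the Mac
         [default]
         # leave defaults untouched so Windows servers behave as before

         [nas]                     # hypothetical Unraid server name
         signing_required=no       # example tunable only; substitute whatever the guide actually recommends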
  4. Every time this runs, it fills my log up with these:

         Apr 6 02:00:02 nas Docker Auto Update: Checking for available updates
         Apr 6 02:00:21 nas Docker Auto Update: Stopping heimdall
         Apr 6 02:00:29 nas kernel: docker0: port 8(veth383ba24) entered disabled state
         Apr 6 02:00:29 nas kernel: veth203f1ae: renamed from eth0
         Apr 6 02:00:30 nas kernel: docker0: port 8(veth383ba24) entered disabled state
         Apr 6 02:00:30 nas kernel: device veth383ba24 left promiscuous mode
         Apr 6 02:00:30 nas kernel: docker0: port 8(veth383ba24) entered disabled state
         Apr 6 02:00:30 nas Docker Auto Update: Stopping wikijs
         Apr 6 02:00:36 nas kernel: docker0: port 1(vethc2a901a) entered disabled state
         Apr 6 02:00:36 nas kernel: veth3b21e64: renamed from eth0
         Apr 6 02:00:37 nas kernel: docker0: port 1(vethc2a901a) entered disabled state
         Apr 6 02:00:37 nas kernel: device vethc2a901a left promiscuous mode
         Apr 6 02:00:37 nas kernel: docker0: port 1(vethc2a901a) entered disabled state
         Apr 6 02:00:37 nas Docker Auto Update: Installing Updates for hdd_scrutiny heimdall wikijs homeassistant
         Apr 6 02:05:52 nas Docker Auto Update: Restarting heimdall
         Apr 6 02:05:53 nas kernel: docker0: port 1(vethdf9918b) entered blocking state
         Apr 6 02:05:53 nas kernel: docker0: port 1(vethdf9918b) entered disabled state
         Apr 6 02:05:53 nas kernel: device vethdf9918b entered promiscuous mode
         Apr 6 02:05:57 nas kernel: eth0: renamed from veth7dd08c1
         Apr 6 02:05:57 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethdf9918b: link becomes ready
         Apr 6 02:05:57 nas kernel: docker0: port 1(vethdf9918b) entered blocking state
         Apr 6 02:05:57 nas kernel: docker0: port 1(vethdf9918b) entered forwarding state
         Apr 6 02:05:58 nas Docker Auto Update: Restarting wikijs
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered blocking state
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered disabled state
         Apr 6 02:05:58 nas kernel: device veth36d8670 entered promiscuous mode
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered blocking state
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered forwarding state
         Apr 6 02:05:58 nas kernel: docker0: port 8(veth36d8670) entered disabled state
         Apr 6 02:06:05 nas kernel: eth0: renamed from veth818c2f7
         Apr 6 02:06:05 nas kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth36d8670: link becomes ready
         Apr 6 02:06:05 nas kernel: docker0: port 8(veth36d8670) entered blocking state
         Apr 6 02:06:05 nas kernel: docker0: port 8(veth36d8670) entered forwarding state
         Apr 6 02:06:08 nas Docker Auto Update: Community Applications Docker Autoupdate finished

     I assume this is normal since the containers are restarting? Is there a way to suppress these log messages?
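     Unraid's syslog goes through rsyslog, so one general option is a property filter that drops the kernel bridge/veth chatter produced whenever a container stops or starts. This is a sketch, not Unraid-specific guidance: the drop-in path is an assumption, and since Unraid regenerates its rsyslog configuration, the snippet would need to be reapplied at boot (for example from the go file or a user script).

         # /etc/rsyslog.d/01-drop-veth-noise.conf   (path is an assumption)
         # discard kernel messages about docker0/veth interfaces changing state
         :msg, contains, "entered blocking state" stop
         :msg, contains, "entered disabled state" stop
         :msg, contains, "entered forwarding state" stop
         :msg, contains, "promiscuous mode" stop

     The "Docker Auto Update:" lines themselves come from the Community Applications plugin rather than the kernel, so those would keep appearing.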
  5. Running this command:

         find /mnt/user -type d -name ".AppleDouble" -exec rm -rf {} \;

     I get tons of errors like this -

     EDIT: doh... it's because I had it in my veto config.
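     If the errors were the usual "No such file or directory" noise from find trying to descend into directories that are being deleted (an assumption, since the error text isn't quoted above), pruning matched directories avoids it:

         # -prune keeps find from descending into a matched .AppleDouble directory that rm is about to remove;
         # "+" batches the rm invocations instead of running one per directory
         find /mnt/user -type d -name ".AppleDouble" -prune -exec rm -rf {} +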
  6. I'm looking at the 2nd option (Clear Drive Then Remove Drive Method) in this guide - https://docs.unraid.net/legacy/FAQ/shrink-array/#the-clear-drive-then-remove-drive-method - and I've already used unBALANCE to empty the drive out. My question is about the note in that guide: does it mean that if I format the drive, I can skip the lengthy clearing step and proceed straight to step 9? The note says the parity drive is 'updated accordingly', but I'm not sure what that means in this context. Is it an instant operation if formatting as opposed to clearing? Or, since the drive currently has 0 usage, can I just remove it, reconfigure the array, and have parity remain valid? Or does an empty (but formatted) disk not mean the same thing as all 0's, so parity must be rebuilt?
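     For context on why the guide distinguishes clearing from formatting (my summary of how parity works, not text from the guide): parity holds the XOR of each bit position across all data disks, so a disk that is all zeros contributes nothing and can be dropped without touching parity, while a freshly formatted disk still carries non-zero filesystem metadata and therefore does affect parity. A one-bit illustration in shell arithmetic:

         # single-parity view of one bit position: P = d1 ^ d2 ^ d3
         d1=1; d2=0
         echo $(( d1 ^ d2 ^ 0 ))   # d3 cleared (all zeros): P = 1
         echo $(( d1 ^ d2 ))       # drop the cleared d3: still 1, so parity stays valid
         echo $(( d1 ^ d2 ^ 1 ))   # d3 only formatted, a metadata bit set: P = 0
         echo $(( d1 ^ d2 ))       # drop that d3 without rebuilding: 1 != 0, parity no longer matches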
  7. OK, so I set up two trial arrays and did a small test of the transfer process with some spare drives. Reconfiguration seems to work quite well and creates the new shares. My last question is: how does Unraid handle a disk from another Unraid system that has the same shares and conflicting files? In my case, the only conflict is the 'system' share, which I'm not familiar with (it looks like Docker/VM data). Will it cause problems if I merge the two arrays, which include the two different system shares?
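     As I understand it (my summary of Unraid user shares, not something stated in this thread), a user share is just the merged view of identically named top-level folders across the data disks, so a disk imported from another server either joins existing shares or surfaces new ones. Disk numbers and folder names below are hypothetical:

         ls /mnt/disk1    # existing disk:   system  appdata  media
         ls /mnt/disk9    # imported disk:   system  archive
         ls /mnt/user     # merged view:     system  appdata  media  archive
         # same-named top-level folders such as "system" merge into a single share; individual files only
         # collide if the exact same relative path exists on both disks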
  8. Questions on how exactly Unraid works with existing drives and how the New Config option works. Let's say I pull a drive from array 1 and insert it into array 2. Before starting array 2, I add the disk via the dropdown menu and click start array. My understanding is that Unraid will not simply accept the disk from array 1 as-is; it will format it, resulting in loss of the disk's data. Is this correct? If so, is the proper procedure here to create a new config, preserve the existing assignments, and rebuild parity? That will not result in data loss on the new disk? And how does Unraid reconcile different share names? If the disk from array 1 has a different root-level share, will the new config just surface a new share with the matching name?
  9. The drives in the 170TB array are not going to be included in the final array; sorry, I should have emphasized that better. The new array will contain a smaller number of drives with larger capacities, so I have to copy all of that data over and then get rid of array 1 entirely.
  10. Asking for feedback on merging two large Unraid arrays into one.

      Array #1 - 176TB, made of 8TB drives. 2 parity. 1TB SSD cache. No Docker, no VMs. This is primarily an archival array: data is written to it once and rarely ever read back.

      Array #2 - 36TB, made of 18TB drives. 2 parity. Cache pool of 1 SSD and 1 NVMe, mirrored. Running Docker containers and VMs. This is my main array and the one I want to keep as the primary new array. I am going to add a bunch of 18TB drives and move the contents of #1 over to a new share on this array. The Docker containers on this array are used quite a bit, so I would prefer to minimize downtime. The massive volume of data in array 1 is the main factor here.

      Approach 1 - Migrate directly from array 1 to array 2: add the new drives to array 2, clear them, then move the data from array #1. What do I do with parity? Leave my array with no redundancy? Or should I leave parity on, since I'd have to do a rebuild at the end anyway? Is it faster to turn off parity entirely and then rebuild it rather than leave it on the whole time? A con of this approach is degraded performance, but I don't really run read/write-intensive operations against array #2; as long as Plex still functions, I think I should be OK?

      Approach 2 - Set up a new (temporary) array: build a brand new array, move the data from array #1 onto those disks, import those disks into array #2, then rebuild parity. The downside is that I need new hardware for the temporary array, but I think I can manage. Also, how would I "import" a disk from one array to another? Do I just stop the array and it will let me add the disk? Note that both array 1 and array 2 use encrypted XFS but have the same password. Presumably the share names on the temporary array and the main array need to be identical for the folder structures to line up? Can I use a trial version of Unraid for this?

      Other questions - What's the best way to actually move the massive amount of data from array #1? I'm not in a huge hurry; I know it will take time, possibly weeks. Doing it disk by disk is not practical since there are 22 drives or so, and I would also prefer to have parity in place since I'm doing full reads on a lot of older drives. That leaves network transfer as the only viable option. Software-wise, how do I handle this? Just load up Krusader via Docker and send 170TB over SMB? How gracefully would it handle errors? Is there a better option (see the rsync sketch after this post)? I have multiple NICs; to avoid bottlenecking traffic, should I dedicate a private network between the machines? I have a 2.5GbE network.

      Other considerations: both arrays are XFS-encrypted but have the same password. I would also like to end up using the license from the #1 array, so I need to swap it with the license on array #2 when it's all done. Is this doable?
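      On the question of how to actually move the data, one common alternative to pushing 170TB over SMB with Krusader is rsync over SSH between the two servers. This is a sketch under assumptions: the hostname, share paths, and root SSH access between the boxes are placeholders, not details from this post.

          # run from the destination (array #2) server; "tower1" and both paths are placeholders
          rsync -avh --partial --progress \
                root@tower1:/mnt/user/archive/  /mnt/user/archive-from-array1/
          # -a preserves ownership, permissions and timestamps; --partial keeps partially copied files,
          # so if the transfer dies you just re-run the same command and it carries on instead of restarting

      Re-running the same command at the end also doubles as a cheap verification pass, since rsync only re-sends what differs.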
  11. Let's say I have two separate Unraid instances. Each server has encrypted disks, and both servers use the same password. Can I move a disk from one to the other, reset the array configuration, and have it pick up the new disk without having to reformat?
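      Unraid's encrypted array disks are LUKS devices underneath, so before touching the array configuration you can at least confirm the moved disk unlocks with the destination server's passphrase. A sketch; the partition name is a placeholder, and this check changes nothing on the disk:

          # with the moved disk attached to the receiving server but not yet assigned to the array
          cryptsetup luksOpen --test-passphrase /dev/sdX1   # prompts for the passphrase, activates nothing
          echo $?                                            # 0 means the passphrase opens this LUKS header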
  12. I just got this on 6.12.6. Thanks, this did the trick. It's kind of sad that after all this time, Unraid still can't reliably shut an array down.
  13. I have a large array: 22 drives totaling 164TB with 2 parity and a 1TB cache, running off a 4-port HBA card plus a maxed-out SAS expander. Software-wise, this array is used for archiving, with very little activity. I write data to it once and it's barely ever read back or used. No VMs or Docker containers. I've been contemplating swapping out my aging mini-ITX motherboard for an ODROID-H3+: https://www.hardkernel.com/shop/odroid-h3-plus/ It's tiny, low energy, and has 2.5GbE. Now, how do I get my drives onto it? An M.2-to-PCIe adapter like this one would take the HBA - https://ameridroid.com/products/m-2-to-pcie-adapter-straight?variant=32064105578530&currency=USD&utm_medium=product_sync&utm_source=google&utm_content=sag_organic&utm_campaign=sag_organic&srsltid=AfmBOoqFL-28eJ3G9GuP385RA_gXZgmc9IJ0DHV_x8g7WRm4sNpJEMcR3wE - and the SAS expander doesn't need to be connected to the motherboard; it only needs power plus a connection to the SAS card. The main question I'm left with is whether this setup will work with so many drives, and how much of a performance hit we're talking about. Given that I'm only reading from or writing to one drive at a time, and at a very low rate, do you think this is doable? The only problem I can think of is a parity check, which will probably take a lot longer, but I can live with that.
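      Rough bandwidth arithmetic for the parity-check concern (assumed figures, not verified specs for this exact board, adapter and HBA combination):

          # assume the M.2 slot gives the HBA PCIe 3.0 x4, roughly 3.9 GB/s usable
          echo $(( 22 * 180 ))   # MB/s if all 22 disks stream ~180 MB/s during a parity check: ~3960, at the link's limit
          # day-to-day use (one disk read or written at a time, a couple of hundred MB/s at most,
          # and 2.5GbE tops out around 300 MB/s anyway) leaves enormous headroom; only parity checks
          # and rebuilds, which hit every disk at once, should run noticeably slower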
  14. I don't see this button; has it been removed since the post was made?
  15. Brand new array running 6.11.5; I just tried to start it up for the first time. When a client tries to connect (tried SMB on both a MacBook and a Windows machine), it fails and I see the following error in the Unraid logs:

          Jan 7 17:38:25 nas2 smbd[13920]: [2023/01/07 20:38:25.224219, 0] ../../source3/smbd/msdfs.c:170(parse_dfs_path)
          Jan 7 17:38:25 nas2 smbd[13920]: parse_dfs_path: can't parse hostname from path nas2.lan.local
          Jan 7 17:38:25 nas2 smbd[13920]: [2023/01/07 20:38:25.224252, 0] ../../source3/smbd/msdfs.c:180(parse_dfs_path)
          Jan 7 17:38:25 nas2 smbd[13920]: parse_dfs_path: trying to convert nas2.lan.local to a local path

      It almost sounds like DNS, but there's a static entry for both the IP and hostname in my router, and Unraid seems to resolve it properly. There doesn't seem to be much on the internet about this other than the Samba source file itself, and I can't figure out what it's doing.
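      One workaround sometimes suggested for parse_dfs_path noise is to connect by the short hostname or IP instead of the FQDN, or to turn off MSDFS referral handling in Samba. Whether either actually resolves this particular connection failure is an assumption on my part; host msdfs is a standard smb.conf global, added here via Unraid's Samba extra configuration (Settings > SMB):

          [global]
              host msdfs = no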
  16. Hi, I have a folder whose last-modified date is out in 2024, and it's causing problems in Plex, constantly making the show appear as the 'latest' one. The fix seems to be a straightforward touch, but the date isn't changing. These are the commands I've tried: The commands execute, but the last-modified date on the folder remains in 2024.
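      For reference, an illustrative way to set the timestamp and then check where it landed (paths are hypothetical, and these are not necessarily the commands tried above):

          touch -m -t 202301010000 "/mnt/user/tv/Some Show"    # -m sets mtime only; -t takes [[CC]YY]MMDDhhmm
          stat -c '%y %n' "/mnt/user/tv/Some Show" /mnt/disk*/tv/"Some Show" 2>/dev/null
          # /mnt/user is a merged FUSE view; if the folder exists on more than one disk, the date reported
          # through the share may come from a copy you haven't touched, so check the per-disk paths too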
  17. Great, thanks! This is my first drive failure and first rebuild, so hopefully it all goes well. Unfortunately, most of the drives in my array are around the same age, so hopefully nothing else fails.
  18. It didn't have an X when I ran the check. I just ran another check in emulated mode and it shows no errors (log attached at the end of this post), but I just want to make sure I did the right thing. The drive shows 'not installed' with an X, but I was still able to click into it and run the check with -nv. Was that the correct way to run against an emulated drive? And yes, I do have a spare disk available. I'm thinking I'll do the rebuild, and if that fails, I can put the original into a file recovery program on another computer.

          Phase 1 - find and verify superblock...
                  - block cache size set to 1473600 entries
          Phase 2 - using internal log
                  - zero log...
          zero_log: head block 149292 tail block 149292
                  - scan filesystem freespace and inode maps...
                  - found root inode chunk
          Phase 3 - for each AG...
                  - scan (but don't clear) agi unlinked lists...
                  - process known inodes and perform inode discovery...
                  - agno = 0
                  - agno = 1
                  - agno = 2
                  - agno = 3
                  - agno = 4
                  - agno = 5
                  - agno = 6
                  - agno = 7
                  - process newly discovered inodes...
          Phase 4 - check for duplicate blocks...
                  - setting up duplicate extent list...
                  - check for inodes claiming duplicate blocks...
                  - agno = 0
                  - agno = 1
                  - agno = 3
                  - agno = 2
                  - agno = 4
                  - agno = 5
                  - agno = 6
                  - agno = 7
          No modify flag set, skipping phase 5
          Phase 6 - check inode connectivity...
                  - traversing filesystem ...
                  - agno = 0
                  - agno = 1
                  - agno = 2
                  - agno = 3
                  - agno = 4
                  - agno = 5
                  - agno = 6
                  - agno = 7
                  - traversal finished ...
                  - moving disconnected inodes to lost+found ...
          Phase 7 - verify link counts...
          No modify flag set, skipping filesystem flush and exiting.

          XFS_REPAIR Summary    Sun Aug 7 09:19:20 2022

          Phase           Start           End             Duration
          Phase 1:        08/07 09:19:14  08/07 09:19:15  1 second
          Phase 2:        08/07 09:19:15  08/07 09:19:15
          Phase 3:        08/07 09:19:15  08/07 09:19:20  5 seconds
          Phase 4:        08/07 09:19:20  08/07 09:19:20
          Phase 5:        Skipped
          Phase 6:        08/07 09:19:20  08/07 09:19:20
          Phase 7:        08/07 09:19:20  08/07 09:19:20

          Total run time: 6 seconds
  19. Hi, I'm running Unraid 6.10.3. I got the dreaded missing/unmountable file system error on a drive, along with I/O and CRC errors in the log. The filesystem is XFS with encryption enabled. I ran a check with -nv and saw quite a bit of corruption. I've slimmed down the logs, but basically the corrupted data falls into three categories:

          entry "FILE1" at block 0 offset 96 in directory inode 276415028 references non-existent inode 2147483800
          would clear inode number in entry at offset 96

      These two are similar, but one references a file and one a folder, so I'm curious what the consequences are for both:

          entry "FILE2" in directory inode 276415028 points to non-existent inode 2147483800, would junk entry
          entry "FOLDER1" in directory inode 616472519 points to non-existent inode 2750238235, would junk entry

          bad hash table for directory inode 616472519 (no data entry): would rebuild
          would rebuild directory inode 616472519

      1. What does 'clear inode number in entry' indicate if I were to run an actual repair? Same question for 'junk entry.' Would they be discarded or end up in lost+found?
      2. I removed the device from the array and spot-checked the files listed in the log against the parity-emulated copy. They all seem to work fine. Should I just swap the disk and rebuild instead of running an xfs_repair? There's nothing too critical in there, but if it would save the hassle of lost+found fragmentation and data loss, I'm all for it. But I also don't know if I'm just misunderstanding FS corruption and the parity-emulated version will have the same issues on a rebuilt drive.
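      For what it's worth, my reading of those messages (paraphrasing general xfs_repair behaviour, not anything from this thread): both 'would clear inode number in entry' and 'would junk entry' mean the directory entry itself gets removed, and since the inode it points at does not exist, there is nothing left to move to lost+found for those entries; lost+found only collects inodes that still exist but have lost their directory entry. A hedged sketch of the usual sequence, with placeholder device and mount names:

          xfs_repair -nv /dev/mapper/md5    # dry run (what was already done): "would ..." lines list planned fixes
          xfs_repair -v  /dev/mapper/md5    # actual repair: dangling entries are dropped, damaged directories rebuilt,
                                            # and any still-intact but disconnected inodes are relinked under lost+found
          ls -la /mnt/disk5/lost+found      # after remounting: numeric names are recovered inodes stripped of their filenames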
  20. They are several very large files. I've attached diags to the original post.
  21. I have 38GB in my cache. According to my array stats, data is being written at about 31MB/s; it varies, but that's roughly the average. According to this calculator, it should take half an hour or so to move this much data at that rate. After 2 hours, it has only moved 4GB. What's the point of a cache drive if mover is this slow? nas-diagnostics-20201227-0859.zip
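      For reference, the arithmetic behind the complaint, using the figures above:

          awk 'BEGIN { print 38*1024/31/60 }'      # ~20.9 minutes expected for 38 GB at 31 MB/s
          awk 'BEGIN { print 4*1024/(2*3600) }'    # ~0.57 MB/s actually achieved (4 GB in 2 hours), roughly 50x slower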
  22. For some reason it showed back up as normal after a few reboots. Only saw the option once and all my data is back.