Community Answers
-
apandey's post in Add drives externally was marked as the answer
Do you have a spare PCIe slot? You could also get an HBA with an external SAS port and connect it to a SAS-attached SATA enclosure. It might still be cheaper to look for a case transplant as suggested by @Frank1940
-
apandey's post in Drive changed from shd to sdi?!?!?!?! was marked as the answer
Post diagnostics when the drive is showing as unassigned. It seems your drive dropped off, and hence the drive letters for the remaining drives changed. The cause of the original drop-off is what we need to find
Drive letters are not important to Unraid; it identifies drives by name and serial number. /dev/sdX names are assigned by Linux and are not guaranteed to remain the same, so if you want to address drives specifically for any use case (like VM passthrough), you should use /dev/disk/by-id paths instead
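A quick way to see the stable names is to list /dev/disk/by-id and resolve where each link currently points (a sketch; the ata- prefix is an assumption that covers SATA drives, and output will differ per system):

```shell
# List stable by-id names and the kernel device each one points to right now.
# Safe to run anywhere; prints nothing if no matching devices exist.
count=0
for dev in /dev/disk/by-id/ata-*; do
  [ -e "$dev" ] || continue          # glob matched nothing, skip
  printf '%s -> %s\n' "$dev" "$(readlink -f "$dev")"
  count=$((count + 1))
done
echo "found $count ata by-id entries"
```

The by-id path encodes the model and serial number, which is why it survives reboots while /dev/sdX does not.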
-
apandey's post in Use up my remaining ram was marked as the answer
If you plan to use zfs, it would be handy to configure some RAM as cache
You can use cache dirs plugin to speed up dir listings (and avoid unnecessary disk access), though I doubt it needs a lot of ram
If you work with any transient data, you can create a RAM disk, though this is more useful with apps than with a basic NAS use case
Linux uses extra RAM for IO buffers and caching anyway, so if you happen to load the same data repeatedly, spare RAM will help even when you don't see it being used explicitly
In the end, don't sweat it too much. Spare RAM is good for a server, definitely better than running with very small headroom. 16GB is not too large to worry about
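For the RAM disk idea, a minimal sketch (the mount point and size are arbitrary examples; the actual mount needs root, so this only attempts it when run as root):

```shell
MNT=/tmp/ramdisk     # hypothetical mount point; on the server you might use /mnt/ramdisk
SIZE=2g              # cap on how much RAM the ramdisk may use
mkdir -p "$MNT"
if [ "$(id -u)" -eq 0 ]; then
  # tmpfs allocates pages lazily, so SIZE is a ceiling, not an upfront reservation
  mount -t tmpfs -o size="$SIZE" tmpfs "$MNT" || echo "mount failed (no privileges?)"
else
  echo "not root; skipping the actual mount of $MNT"
fi
```

Anything written there lives in RAM only, so treat it as scratch space that vanishes on reboot.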
-
apandey's post in Interface down ETH1 was marked as the answer
What do you see on the Windows PC end? It should also see a link down if it's point to point
Have you checked the cable on another connection?
-
apandey's post in Drive keeps spinning up; how to diagnose the cause? was marked as the answer
Seems like that is the Nextcloud default; it logs under the data directory. Odd choice
If you want, you can change the log location in config.php, which I believe is under appconfig/www/nextcloud/config
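If you do change it, the relevant entries in config.php look something like this (the path is just an example; any location the app can write to works):

```php
<?php
$CONFIG = array (
  // ... existing settings ...
  'logfile' => '/config/nextcloud.log',  // example path outside the data directory
  'loglevel' => 2,                        // 2 = warnings and above
);
```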
-
apandey's post in Unable to move/copy files on unraid to my Mac. Now my password is also not working. :( was marked as the answer
not unless it is a public share. You need access to the management UI first to see what the setup is
-
apandey's post in Scripting questions was marked as the answer
Anything in /boot/config/go will get executed at startup
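For example, a go file with a custom line added at the end might look like this (the last line is just an illustration of a custom startup command, not something you need):

```shell
#!/bin/bash
# /boot/config/go - runs once at every boot
# Start the Management Utility (this line is present by default)
/usr/local/sbin/emhttp &
# Custom startup commands go below, e.g. an illustrative marker:
echo "go script ran at $(date)" >> /var/log/go-custom.log
```

Keep in mind it runs early in boot, before the array is started, so don't reference array paths from it.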
-
apandey's post in Problem moving to new server with the same hard drives was marked as the answer
Since you were using RAID0 via a RAID controller, there would be RAID-identifying tags and data on the drive which offset the actual filesystem. This is why the drive isn't seen as an XFS-formatted drive. I hope you did not corrupt it further by assuming it's XFS and attempting repairs
Unraid expects to be working with a passthrough controller that gives it direct control of the disks
The safest option is to start with freshly formatted drives in the new server's array and copy over the data, even if it takes time. You can do it disk by disk, slowly growing the target array, or all at once if you have backups to copy from
-
apandey's post in How "should" a network be setup? was marked as the answer
If they are all SFP+ interfaces, you can connect your pfsense + 7 other 10Gig devices to your aggregation switch. All connections are negotiated point to point. So that meets your objective already
-
apandey's post in Cache drive swap from xfs to btrfs was marked as the answer
how is your cache set up w.r.t. the shares?
simplest would be to:
1. stop docker / VM services (not just VMs and containers, but services themselves)
2. set all shares that use cache to cache=“yes” (make note of what they are set as so you can go back to that)
3. run the mover, make sure nothing remains on cache
4. set all shares to cache=“no”
5. again make sure nothing remains on cache, else go back to 3 🙂
6. rebuild cache as btrfs mirror
7. set all shares to original cache preference. If there were any cache=“only”, first set cache=“prefer”, then run mover and finally switch to “only”
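For steps 3 and 5, you can double-check from a terminal that the pool is really empty (assuming a pool named "cache" mounted at /mnt/cache; adjust the path to your pool name):

```shell
# Count remaining entries on the pool; 0 means the mover got everything.
# 2>/dev/null keeps this quiet if the path doesn't exist.
remaining=$(ls -A /mnt/cache 2>/dev/null | wc -l)
echo "entries left on cache: $remaining"
```

If the count isn't 0, check which share the leftovers belong to before reformatting, since anything still on the pool is lost when you rebuild it.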
-
apandey's post in Why is Disk14 visible mounted in /media/disks was marked as the answer
is your array healthy? I see a couple of issues in your diagnostics
disk16 seems to have a corrupted filesystem. Do you see all your array disks mounted? Not directly related to disk14, but I do see it discovered as a new drive and mounted under unassigned devices. Can you post a screenshot of your UI Main tab?
you also seem to be getting PCIe BadTLP errors. Perhaps you should add the pci=nommconf boot parameter
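That parameter goes on the append line in /boot/syslinux/syslinux.cfg. A sketch of what the default Unraid boot entry would look like with it added (your file may differ if you've customized boot options before):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci=nommconf initrd=/bzroot
```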
-
apandey's post in New user, dedicated disk. was marked as the answer
If it was an array with parity protection, you could simply replace the disks one by one and let Unraid rebuild the data onto the new ones using parity (the parity disk needs to be at least as large as the biggest data disk). But since it's a single disk, your best bet is to mount the old disk as unassigned, do a new config with the new disks, then copy over the data
The Unraid manual linked at the top of the Unraid UI has instructions for all these procedures, but feel free to ask here too
-
apandey's post in One disk getting full while others not so much was marked as the answer
What is being written? New files or updates to existing files? If a file is already on a disk, it will remain there irrespective of free space elsewhere
How is the data being written? Tools like rsync create all the directories first, so things may all end up on the first disk
-
apandey's post in Brand new to unRAID and could use some help was marked as the answer
yes, it is most likely a BTRFS RAID1 pool, with 1 disk of redundancy. You can confirm by clicking the cache drive settings in the Main tab of the Unraid UI; under balance status it will show the filesystem type
as for what is better, it depends on your usage. If you prefer 2 separate independent pools without any drive redundancy, that's fine too
simplest is to make appdata a cache "only" share (see share settings), or create a separate cache "only" share and point the plex container metadata mount point at it
if the above terms are new to you, read the respective parts of the Unraid manual and ask more questions here
-
apandey's post in Help Modifying A CP Command For Script was marked as the answer
cp /mnt/user/"First Location"/"Rick And Morty (2013)"/* /mnt/user/"Second Location"/"Rick And Morty (2013)"/
-
apandey's post in SMART self-test Completed: read failure was marked as the answer
Extended smart test runs completely in drive firmware without any influence from the rest of the system, so if that comes back with errors, the best course of action is to replace the drive. There is not much you can do about already failing hardware.
Since this is parity, you aren't immediately losing data, but you are one step closer to that if another disk fails while you are waiting to replace it
-
apandey's post in Unraid boot looping after adding 2nd VM'ed pass through SATA SSD was marked as the answer
what manual path are you setting? /dev/sdX names can change across reboots, so if you want it to be consistent, use the /dev/disk/by-id/ path, which remains the same across boots
-
apandey's post in Log Full after Upgrade was marked as the answer
You have pcie errors spamming the logs. Try adding pcie_aspm=off to boot args in syslinux config
See this thread for details
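In the stock /boot/syslinux/syslinux.cfg, the append line of the default boot entry would then look something like this (sketch; keep any other parameters you already have on that line):

```
append pcie_aspm=off initrd=/bzroot
```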
-
apandey's post in Help with running Smart Checks on more drives than 25 was marked as the answer
try this instead
#!/bin/bash
for disk in $(lsblk -I 8,65,66 -ndo name); do
  smartctl --test=short /dev/${disk}
done
it lists the names of all devices using lsblk, filtered to major device numbers 8, 65 or 66 (sd*), then loops through them and starts a short SMART self-test on each
-
apandey's post in Cache pool mounted read only with RAID showing full but disk showing empty was marked as the answer
If you have managed to repair fs and scrub is no longer giving you errors, no need to do anything. You should probably run one more scrub to be sure
Good idea to do a scheduled scrub. I run one once a month, at a separate time from my parity check
Balance, you may not need. It depends on your data write pattern. If your utilization stays good, there is no need to rewrite everything (which is what balance does)
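For scheduling, a minimal script you could run monthly from cron or the User Scripts plugin (the pool path is a placeholder; -B makes the scrub run in the foreground so the status line afterwards reflects the finished run):

```shell
#!/bin/bash
# Monthly btrfs scrub sketch for a pool assumed to be mounted at /mnt/cache
btrfs scrub start -B /mnt/cache   # -B: wait for the scrub to complete
btrfs scrub status /mnt/cache     # print the summary, including any errors found
```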
-
apandey's post in VirtIO ISO not mounting in windows 11 was marked as the answer
have you tried setting the bus to SATA?
-
apandey's post in The Shares are all showing the Data is Unprotected was marked as the answer
The shares are unprotected because they are using the cache, which is a single drive with no redundancy. Since at any given point in time, some files can be on cache, you can lose them if the cache drive fails
You can click compute on shares page to see which drives are used for each share
-
apandey's post in timemachine backup keeps disconnecting was marked as the answer
In my case, I got distracted with other stuff and parked this issue for a while. It eventually finished first backup and from there on incremental backups have been smooth. So whatever it was, it seems to only affect initial backup. The network interface itself is stable, I can transfer large amounts of non-timemachine data without any interruptions
Anyway, it's mysteriously fixed and is no longer an issue for me, so I'll mark this solved
-
apandey's post in [SOLVED] I've messed up a drive swap. What do I do? was marked as the answer
The simplest would be to
1. Pause any time machine backups
2. Format 6A and ensure the share is available again on array
3. Rsync the data from 6B to array
Another approach would be to put back 6A and rebuild parity to match the current disk contents, but this risks the whole array during the rebuild, so I suggest you don't attempt it
-
apandey's post in Multiple maintenance items - order of operations/best practice? was marked as the answer
if it was me, I would do it in this order:
1. upgrade unraid version - so that all other operations benefit from running on latest software, and you are not left dealing with any older bugs that may be less supported
2. replace parity with 18TB, rebuild. Don't add a 2nd parity if you don't intend to keep 2 parity drives. Just shutdown, take out 14TB parity and replace it with 18TB, assign it to parity slot and start the array to rebuild parity
3. replace one of the 1.5TB data drives with the 14TB. If you want, you can move data off that drive first, but it's not really needed. Follow https://wiki.unraid.net/Replacing_a_Data_Drive
4. Move data off the remaining 1.5TB drive and then remove it from the array. Follow https://wiki.unraid.net/Shrink_array