Posts posted by JonathanM
My memory is fuzzy, but I believe that file is created by one of the flash backup packages inside the backup archive, so the only reason it would exist on the USB is if you extracted a backup to the drive at some point. It isn't normally on the USB stick, and it isn't kept updated on the drive.
If you are passing through a physical GPU, it's assumed you will have a monitor connected to that GPU to see the output. Some cards even require a monitor to be detected before they will work at all; in those cases, if you are using some flavor of remote desktop software for access, you still need a real monitor or a monitor-emulation (dummy) plug.
10 hours ago, ConnerVT said:
Is the unknown "usable enough" something that one would wish to inject into the middle of a disaster recovery?
Preferably not, but it's at least something to work with vs. a complete loss. "Sorry, we tried everything we could" is preferable to "You have no backup archives at all, so you are hosed."
6 hours ago, orlando500 said:
I buy the "best" usb sticks but they tend to last about a year for some reason. So a better solution is on my wish list.
What is your definition of "best"?
In the context of Unraid, "best" typically means USB 2.0, physically large, all-metal, low-idle-power-draw, name-brand sticks.
Very few drives that meet those criteria are still for sale, unfortunately.
4 hours ago, ConnerVT said:
I honestly can't understand why people do this. Which is worse? Not having a backup or having a backup file which is invalid?
Sometimes the partially valid backup contains all that is really needed to recover. The parts that change rapidly may not be "valuable," in the sense that restoring an older copy of some files in the archive may not be a breaking issue, versus not having any data at all.
The ability to keep a backup that technically isn't complete, but is complete enough, is better than nothing. Chances are, even if the backup doesn't verify, it's still usable enough for disaster recovery, especially if you keep multiple dates of backups.
Some (many?) containers have the option to keep internal database backups, and since those files are pretty much guaranteed to be stable even when the container is still running and changing the active database, the "invalid" backup still contains a valid backup made by the app itself that can be used.
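As a sketch of what those app-internal backups look like: many containers keep their state in SQLite, and the sqlite3 CLI's online .backup command produces a consistent snapshot even while the database is in use. All paths below are made up for the demo; a real container would write somewhere under its appdata folder.

```shell
mkdir -p /tmp/db-demo
# Simulate a container's live database.
sqlite3 /tmp/db-demo/app.db "CREATE TABLE settings(k TEXT, v TEXT); INSERT INTO settings VALUES('region','US');"
# .backup takes a consistent snapshot even if the db is being written to,
# so the snapshot file is safe to sweep up in a nightly appdata backup
# even though the live .db file itself may be mid-write.
sqlite3 /tmp/db-demo/app.db ".backup /tmp/db-demo/app-snapshot.db"
sqlite3 /tmp/db-demo/app-snapshot.db "SELECT v FROM settings;"
```

That snapshot file is the "valid backup made by the app itself" that survives inside an otherwise non-verifying archive.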
Making backups of running containers is complicated; it's not a black-and-white issue like your statement seems to imply.
5 hours ago, Stephenph said:
The parity check is scheduled for every week
Why so often? Default is once a month, typically that's enough.
Parity checks are only needed to verify that parity is being kept up to date, and they are useful to ensure that seldom-used disks are still completely error free, so if a drive fails it can be successfully rebuilt. Parity is kept up to date in real time; checks only verify that the real-time process is still working as it should.
8 hours ago, Revan335 said:
Why changed/set this in the Past to vdisk?
In the past, the docker engine required a certain filesystem with specific options. An image was a quick way to make sure the correct options were used, without requiring an entire drive to be formatted that way.
I know you are kicking yourself hard enough already, but I do need to point out that using disk encryption can greatly complicate file system corruption issues because there is another layer that has to be perfect. I would NEVER recommend encrypting drives where you aren't keeping current separate backups, it's just too risky. Verify your backup strategy works by restoring random files and comparing them before you start using encryption. Honestly, I wouldn't recommend encryption unless you have a well laid out argument FOR encrypting.
On 3/18/2023 at 6:42 PM, wacko37 said:
Still struggling to understand why this is not already a feature? How is everyone dealing with backups of their VM's without this feature?
Client / server based backups.
If you install proper backup software inside the VM, it can make valid backups without taking down the VM. I personally use UrBackup (there is a container for it in the app section of Unraid), but you can just as easily use the built-in Windows backup if you are using Microsoft products, or Acronis, or any number of other proper backup packages.
Doing it this way instead of trying to take a live image snapshot ensures you will have a valid backup instead of a merely crash-consistent one. If you restore a snapshot that was taken of a running VM, you risk the snapshot not containing a valid backup, because there is no way for the OS inside the VM to know that it needs to commit changes from RAM to disk.
Snapshots of VMs are only a good backup if you are willing to properly shut down the VM before taking the snapshot, or have some other mechanism in the VM that allows a live snapshot to work 100% of the time. It's much easier to not reinvent the wheel and use good backup software inside the VM.
On 3/18/2023 at 8:40 AM, Hellomynameisleo said:
I want to do this so it increase the I/O performance when accessing multiple files it will spin up and use multiple HDD instead of just a single one taking all the load.
You do realize this will still limit your writes, since every write is updating the parity disk(s). Have you looked into turbo write?
Just now, eliminatrix2 said:
Should only be a couple of hours hopefully
Depends on the size of the parity disk and the speed of the rest of the drives / HBA involved.
If it's truly urgent to get your main server back online, you probably need to just cancel that and wait until support gets back to you on the Plus license. A 16TB parity drive build could take a day or more.
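For a rough feel of the numbers (assuming an average sustained speed of about 150 MB/s, which is just a ballpark; real drives slow down toward the inner tracks):

```shell
# 16 TB = 16,000,000 MB; at ~150 MB/s average, a full parity build takes roughly:
seconds=$(( 16 * 1000 * 1000 / 150 ))
echo "$(( seconds / 3600 )) hours"    # about 29 hours, i.e. "a day or more"
```

Slower drives, a saturated HBA, or concurrent array activity push that number up further.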
Don't hesitate to speak up if you are having issues, we're generally game to troubleshoot right along with you!
@eliminatrix2, BTW, I'm not an employee, just a very long time user. Please try to be polite; escalating attitudes aren't going to get things moving faster, and they generally make people less willing to assist.
42 minutes ago, eliminatrix2 said:
REACTIVATE the USB drive it's currently using.
You can't do that. Once a drive has been blacklisted it's done.
What you can do is transfer the entire contents of the config folder, WITHOUT the license key file, from one drive to another.
So... first things first. Make a backup of ALL the flash drives, taking notes on what went where.
Take the stick that has the currently valid Pro license and put the entire contents of the config folder from your main server on it, minus the license key file, making sure to put the stick's current Pro license file back in place.
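A sketch of that shuffle in shell terms. All paths and filenames below are stand-ins for the demo (on the real stick the config folder lives at the root of the flash drive):

```shell
SRC=/tmp/demo/main-server-config   # config folder backed up from the main server
DST=/tmp/demo/stick/config         # config folder on the stick with the valid Pro key

# Demo setup standing in for the real files:
mkdir -p "$SRC" "$DST"
touch "$SRC/super.dat" "$SRC/Old.key" "$DST/Pro.key"

cp "$DST/Pro.key" /tmp/demo/Pro.key.saved   # set the valid Pro key aside first
cp -r "$SRC"/. "$DST"/                      # bring over the entire config folder
rm -f "$DST"/*.key                          # drop any key files that came along
cp /tmp/demo/Pro.key.saved "$DST/Pro.key"   # put the valid Pro key back
```

The end state: all the main server's configuration, with only the stick's own valid license key present.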
One of the issues with this style of selling is the lack of concrete delivery dates. They hope they can stall the shipments until the prices people paid become normal market prices, because tech tends to get cheaper as newer, higher-capacity items come on the market.
I see this as the best case scenario, where you do actually receive the products eventually, but by the time they ship, you could purchase them on the open market for less.
Worst case, they are outright frauds and intend to close up shop shortly after shipping enough units to satisfy the influencers who were gullible enough to hype them. The Kickstarter model has so little accountability that any money you send now, before products are available for resale on the open market, should be considered a gift to the campaign; you just hope they are good people, honest enough not to skedaddle with millions of free dollars.
This particular campaign has just the right mix of authenticity to bring in huge amounts of cash, I hope they really do have the knowledge and honesty to deliver, but it doesn't smell that way right now.
47 minutes ago, Pstark said:
PSU was working when I pulled it from the old setup. I did the short the pins on the mb connector and it powered up fine.
Can I short the those same pins to power up the PSU while it connected to the board? Will that damage anything?
I wouldn't if it were me. Isolate the PSU, so the only thing connected is the power cord, that way if the PSU is faulty it can't hurt anything else.
9 minutes ago, Indi said:
So the *arr's viewpoint, they are two root folders.
That explains why you are getting a copy / delete.
10 minutes ago, Indi said:
I notice now that /downloads for Sonarr is /mnt/cache/downloads while Sab is /mnt/user/downloads, could this be my issue?
Not directly, but that could cause some strange behaviour if the downloads share is set to cache:yes
Before you start changing things around, I think you need to get a deeper understanding of user shares and how they interact with the disk paths. Hard links cannot be valid across the two types of paths.
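To see why the path type matters: a hard link only works when both names live on the same filesystem, which is exactly what differs between the /mnt/cache/... and /mnt/user/... views of the same files. A quick demo on ordinary same-filesystem paths (demo paths made up):

```shell
mkdir -p /tmp/hl-demo/downloads /tmp/hl-demo/TV
echo data > /tmp/hl-demo/downloads/ep1.mkv
# This succeeds only because both directories are on the same filesystem;
# across two different filesystems ln fails with "Invalid cross-device link".
ln /tmp/hl-demo/downloads/ep1.mkv /tmp/hl-demo/TV/ep1.mkv
# Same inode number for both names = one file on disk, zero extra space used.
stat -c %i /tmp/hl-demo/downloads/ep1.mkv
stat -c %i /tmp/hl-demo/TV/ep1.mkv
```

Mix a disk path and a user share path in the *arrs and that ln step is the part that breaks.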
1 hour ago, Richamc01 said:
I already have the "default_phone_region" set to US as seen below.
No, you don't.
1 hour ago, Richamc01 said:
'defualt_phone_region' => 'US',
You've looked at it so many times you can't see the issue. I'm not making fun of you; I've been there, done that. Compare it letter by letter if you need to.
4 hours ago, Aran said:
It looks like it may be legit after all.
Smart money would reserve judgement until there are actual products in the hands of the general public; reviewers posting success stories don't count.
11 hours ago, Indi said:
during an import operation from Radarr or Sonarr etc, the cache shows Reads of 450Mbps and Write 450Mbps
Whether a move action uses a rename or a copy / delete depends on how the application views the file systems. If it sees the two locations as part of the same file system, it will rename; if it thinks there are two different file systems, it will copy / delete. From the *arr's viewpoint, do the two paths share a common root, or are they two root folders, /downloads and /TV?
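You can watch that distinction directly: a move within one filesystem is an instant rename (the file keeps its inode and no data is rewritten), while a move between filesystems forces a full copy / delete. A demo on same-filesystem paths (paths made up):

```shell
mkdir -p /tmp/mv-demo/downloads /tmp/mv-demo/TV
echo x > /tmp/mv-demo/downloads/show.mkv
before=$(stat -c %i /tmp/mv-demo/downloads/show.mkv)
# Same filesystem, so mv is a rename -- completes instantly, no data copied.
mv /tmp/mv-demo/downloads/show.mkv /tmp/mv-demo/TV/show.mkv
after=$(stat -c %i /tmp/mv-demo/TV/show.mkv)
echo "$before $after"    # identical inode numbers -> it was a rename
```

If the two paths had been on different filesystems, mv would silently fall back to reading and rewriting every byte, which is the read+write traffic you're seeing on the cache.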
22 minutes ago, harshl said:
Seems to me that if ZFS is implemented in its entirety, that multiple array pools will also essentially be implemented, no? At least it would be if you chose to use that file system.
Multiple pools have been available for a while now. 6.12 adds the ability to use ZFS for those multiple pools.
It's the single-volume-per-drive parity-based array that is being proposed to have multiple instances. That style of array will likely always be restricted to a single file system on each member drive, so the multi-drive features of ZFS will never be usable in the traditional Unraid parity array.
a. Try the board separate from the case, lying on the box it came in (account for PCIe card overhang if needed). The antistatic bag it was packaged in is good as a base to be sure it doesn't get damaged.
b. Map out all the case standoffs and be sure they don't touch anything on the bottom of the board?
Also, disconnect all the case LED and switch leads; you don't really need them to run the board, and briefly shorting the power switch header should be enough to start things. Remove the RAM as well, so it's just the PSU leads, the CPU, and the CPU fan lead connected, and see if you at least get fan spin.
Is there a power present LED on the motherboard? Does it light up when the PSU has mains voltage? The IPMI portion of the board should be active and working even without the CPU if it's like some of the server boards.
Soon™️ 6.12 Series
In the context of Unraid, the parity array can indeed have zfs and btrfs and xfs all as single devices in the array, each drive in the array has an independent filesystem.
Each pool in Unraid only has a single file system type, but you could have a zfs pool, a btrfs pool, and a single device xfs pool.