Comments posted by custom90gt
-
2 minutes ago, JorgeB said:
You can try that with rc3 to confirm, and also check if it still logs the /mnt// error.
Sorry, stupid question: which file shows the /mnt// error?
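From the log lines quoted further down this page it looks like it comes from the syslog, so something like this should turn it up (standard Unraid syslog path assumed):
grep '/mnt//' /var/log/syslog     # search the live syslog for the malformed path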
-
3 hours ago, JorgeB said:
Unlikely, and didn't you go back to rc3 and the issue remained?
I did, but I didn't try to restore my backups. It's just weird that none of my backups worked at all.
-
4 hours ago, JorgeB said:
The last diags you posted have libvirt.img mounting, but it's a very recent image, so probably empty, and it's still showing that weird /mnt// error. One thing you could try is redoing the flash drive: back up the current one first, recreate the flash with a stock Unraid install, and restore only the bare minimum, like super.dat and the pools folder for the assignments, the key, and the docker user-templates, then boot Unraid and post new diags.
In the last diags I was trying to load the most recent backup made before I upgraded to RC4. I also tried earlier backups to no avail. I don't know where the /mnt// error comes from; my only thought is that it has something to do with the new exclusive access, since none of my backups work. Sadly I've got the server back up and running by manually reinstalling everything, so I'm not willing to break it down just yet.
Once my backup server is done running some hardware tests in a few days, I'll try upgrading it to RC4 and see if things break once I select exclusive mode. If they do, I can do more testing for you.
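For my own notes, this is roughly what that bare-minimum flash restore would look like when I do try it; the backup destination and the user-templates location are my assumptions, so adjust the paths:
# back up the current flash before touching it (flash is mounted at /boot)
cp -r /boot /mnt/zfs/backups/flash_backup_$(date +%Y%m%d)
# after recreating the flash with a stock install, copy back only the minimum
cp /mnt/zfs/backups/flash_backup_YYYYMMDD/config/super.dat /boot/config/
cp -r /mnt/zfs/backups/flash_backup_YYYYMMDD/config/pools /boot/config/
cp /mnt/zfs/backups/flash_backup_YYYYMMDD/config/*.key /boot/config/
mkdir -p /boot/config/plugins/dockerMan
cp -r /mnt/zfs/backups/flash_backup_YYYYMMDD/config/plugins/dockerMan/templates-user /boot/config/plugins/dockerMan/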
-
1 hour ago, JorgeB said:
Not that it helps now, but you should always have a current backup of libvirt; the docker containers you can just recreate.
The problem is I have many current backups and none of them are able to mount; the backup that I tried to use was made 2 days ago. The one on my backup server from a week ago also doesn't mount. Not sure what's going on with it.
-
1 hour ago, JorgeB said:
Try rebooting first, if it still fails to start post new diags.
Sadly it doesn't seem to be able to start after changing it to any of the backup images. I also tried copying the backup to /mnt/user/system/libvirt/libvirt.img and restarting, but still no VMs. I'm still not able to get any of my dockers working either. I'm reinstalling them, but that's not my preferred fix for sure.
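For reference, this is roughly what I did for the copy-and-restart attempt (a sketch only, with my backup path, and the VM service stopped in Settings -> VM Manager first):
cp /mnt/user/system_backup/libvirt/libvirt.img /mnt/user/system/libvirt/libvirt.img
file /mnt/user/system/libvirt/libvirt.img      # should report a BTRFS filesystem if the image is intact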
-
5 minutes ago, JorgeB said:
Try using the backup instead to see if the VMs re-appear, assuming it's fairly recent.
Sadly I already did and it didn't work out. I didn't try to just point to it directly though.
*on edit*
I just tried pointing directly to the backup folder, and then it gives me the error that it cannot start the libvirt service.
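If anyone wants to check whether a backup image itself is still readable, this is the kind of manual loop mount I mean (the mount point is just an example):
mkdir -p /tmp/libvirt_test
mount -o loop,ro /mnt/user/system_backup/libvirt/libvirt.img /tmp/libvirt_test
ls /tmp/libvirt_test/qemu        # the VM definitions live under qemu/ if the image is good
umount /tmp/libvirt_test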
-
1 minute ago, JorgeB said:
The command can take some time, but it must always finish, or at least it should; you have to get the cursor back.
You are right, I must have interrupted it or something:
root@Harley:~# find /mnt -name libvirt.img
/mnt/user/system/libvirt/libvirt.img
/mnt/user/system_backup/libvirt/libvirt.img
/mnt/zfs/system_backup/libvirt/libvirt.img
/mnt/cache/system/libvirt/libvirt.img
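In case it helps, a quick way to compare which of those images is the newest (paths taken from the find output above; running dump-super on a plain image file rather than a device is an assumption on my part):
ls -lh /mnt/user/system/libvirt/libvirt.img /mnt/zfs/system_backup/libvirt/libvirt.img   # modification times
btrfs inspect-internal dump-super /mnt/user/system/libvirt/libvirt.img | grep '^generation'   # btrfs transid
-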
2 minutes ago, JorgeB said:
That cannot be the full output, but if the other result is from the disk path, try downgrading back to rc3. I don't think it's an rc4 issue, but that would confirm it.
I went back to RC3 but everything was still broken. Sadly, that's the only thing that shows up after pasting in your command and waiting about 5 minutes.
-
1 hour ago, JorgeB said:
Also post the output of:
find /mnt -name libvirt.img
That diagnostic also had the "replaced" libvirt.img
Output:
/mnt/user/system/libvirt/libvirt.img
-
2 minutes ago, JonathanM said:
Quick clarification please, for the minority of us that have leveraged "only" and "no" in unconventional ways, i.e. to access all the content of both the array and pool but NOT automatically move things around, manually moving things as needed, does the "additional check" force exclusive access to NO, and continue to work as before showing a fuse mount of array and pool content?
I just want to be sure before I get a nasty surprise where most of my VMs suddenly lose their images. I currently have domains set to cache:only, so new VMs get created on the pool, but if I don't plan to use the VM very often I'll manually move the vdisk to the array, and move it back if I need to.
This may be why I lost all of my dockers and VMs...
-
21 minutes ago, JorgeB said:
Something weird is going on here:
Apr 28 07:08:32 Harley emhttpd: shcmd (37408): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker-xfs.img' /var/lib/docker 20
Apr 28 07:08:32 Harley root: Specified filename /mnt//system/docker/docker-xfs.img does not exist
.
Apr 28 07:06:18 Harley emhttpd: shcmd (29382): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
Apr 28 07:06:18 Harley root: Specified filename /mnt//system/libvirt/libvirt.img does not exist.
Both images are not being found, and the path is weird (/mnt//). There's no message about new images being created, but I can see at least with libvirt that the image is new because of the btrfs transid. Try re-applying the paths for both the Docker and VM services and post new diags.
I changed the directory, hit apply, and then changed it back to default before downloading the diagnostics. Not sure where that extra forward slash is coming from.
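For what it's worth, I also checked what path the services had stored on the flash; assuming the stock config file names, this is what I looked at:
grep -i image /boot/config/docker.cfg /boot/config/domain.cfg   # the paths emhttpd hands to mount_image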
-
1 minute ago, JorgeB said:
Please post output of
zfs list
NAME USED AVAIL REFER MOUNTPOINT
cache 151G 570G 160K /mnt/cache
cache/appdata 3.20G 570G 3.20G /mnt/cache/appdata
cache/domains 20.7G 570G 20.7G /mnt/cache/domains
cache/downloads 83.0G 570G 83.0G /mnt/cache/downloads
cache/isos 9.87G 570G 9.87G /mnt/cache/isos
cache/system 564K 570G 564K /mnt/cache/system
cache/temp 34.4G 570G 34.4G /mnt/cache/temp
optane 27.4G 79.2G 104K /mnt/optane
optane/plex_appdata 27.3G 79.2G 22.3G /mnt/optane/plex_appdata
zfs 22.6T 29.0T 529K /mnt/zfs
zfs/appdata_backup 3.38G 29.0T 3.38G /mnt/zfs/appdata_backup
zfs/apps 77.6G 29.0T 77.6G /mnt/zfs/apps
zfs/army 2.37G 29.0T 2.37G /mnt/zfs/army
zfs/auto_stuff 11.7G 29.0T 11.7G /mnt/zfs/auto_stuff
zfs/backup_server 205K 29.0T 205K /mnt/zfs/backup_server
zfs/backups 83.4G 29.0T 83.4G /mnt/zfs/backups
zfs/books 11.6G 29.0T 11.6G /mnt/zfs/books
zfs/capture_backup 12.5G 29.0T 12.5G /mnt/zfs/capture_backup
zfs/documents 1.43G 29.0T 1.43G /mnt/zfs/documents
zfs/domains_backup 20.9G 29.0T 20.9G /mnt/zfs/domains_backup
zfs/exercises 23.0G 29.0T 23.0G /mnt/zfs/exercises
zfs/family_backups 432G 29.0T 432G /mnt/zfs/family_backups
zfs/flash_backup 1.28G 29.0T 1.28G /mnt/zfs/flash_backup
zfs/format 27.2G 29.0T 27.2G /mnt/zfs/format
zfs/games 6.80T 29.0T 6.70T /mnt/zfs/games
zfs/holly 7.63G 29.0T 7.63G /mnt/zfs/holly
zfs/home_videos 17.8G 29.0T 17.8G /mnt/zfs/home_videos
zfs/kid_movies 1.01T 29.0T 1.01T /mnt/zfs/kid_movies
zfs/kid_tv_shows 1000G 29.0T 1000G /mnt/zfs/kid_tv_shows
zfs/movies 5.84T 29.0T 5.84T /mnt/zfs/movies
zfs/music 141G 29.0T 141G /mnt/zfs/music
zfs/other_pictures 26.2G 29.0T 26.2G /mnt/zfs/other_pictures
zfs/pictures 269G 29.0T 269G /mnt/zfs/pictures
zfs/plex_backup 50.7G 29.0T 48.9G /mnt/zfs/plex_backup
zfs/portuguese 19.8G 29.0T 19.8G /mnt/zfs/portuguese
zfs/residency 7.74G 29.0T 7.74G /mnt/zfs/residency
zfs/school_stuff 344G 29.0T 344G /mnt/zfs/school_stuff
zfs/system_backup 1.03M 29.0T 1.03M /mnt/zfs/system_backup
zfs/system_files 11.9G 29.0T 11.9G /mnt/zfs/system_files
zfs/tv_shows 6.24T 29.0T 6.24T /mnt/zfs/tv_shows
zfs/westerns 170G 29.0T 170G /mnt/zfs/westerns
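A quick way I've been checking whether a share actually got an exclusive (bind) mount rather than the usual FUSE mount, using the system share as an example (this is just what I see on my box):
df -h /mnt/user/system    # reports "shfs" for a FUSE mount, or the pool device/dataset when exclusive
-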
16 minutes ago, bonienl said:
The WRITE operation does not change the primary location, it only changes the settings related to the share allocation.
Well, that makes sense; odd that it seems to have updated them from array to cache, though. Or maybe that's my imagination.
-
4 hours ago, Zonediver said:
Well, I understand what you mean - so the function itself remains the same, but the "new description" is different (for new Unraid users). Then... I'll ignore it 🤣👍
It makes it so the average user can easily understand. We get that you know how to do it with the old method; that's great.
-
4 hours ago, bonienl said:
Please add more information, like screenshots of what you are trying to do.
What are the source settings? What are the destinations? What are the results?
Using the "Write" function to copy settings from source to destination(s) works as expected for me.
Maybe difficult to fully explain, but here is what I did:
1. In one of my share folders (documents) I selected my ZFS pool as my primary storage pool.
2. Hit apply.
3. Used the "write settings to" function to try to copy the settings to other applicable shares (games, movies, etc).
4. Checked the shares that I attempted to apply the settings to and they were set to "cache" instead.
I tried it multiple times; however, it kept them as cache and not ZFS. I just tried the "write settings to" function again and it kept the primary storage as ZFS, so maybe it was just a fluke? Just a minor annoyance having to manually set the primary storage location on 20+ shares.
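For anyone else hitting this, the per-share settings end up as small .cfg files on the flash, so you can at least see what the write actually stored; the key names here are my assumption from looking at my own files:
grep -H 'shareUseCache\|shareCachePool' /boot/config/shares/*.cfg   # primary/secondary storage per share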
-
Weird that it worked just fine until the upgrade. Is the array data stored somewhere else? That's the only thing that popped back up; however, the array was set to 26 devices or something like that when I only have the one USB drive.
-
So far I'm liking the new way of listing the primary and secondary pools. Also, with everything set to exclusive access, I have much less CPU usage. Excited to try some zfs transfers to see if the speed improves; I was getting around 550MB/s transfers to my backup server.
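The kind of transfer I mean is a plain snapshot send over ssh, something like this (dataset, snapshot and host names are just placeholders):
zfs snapshot zfs/backups@nightly
zfs send zfs/backups@nightly | ssh root@backupserver zfs receive -F backup/backups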
-
Just discovered another issue. If I set a share to my zfs pool and try to write that setup to other shares, it puts them on "cache" instead...
-
13 minutes ago, ljm42 said:
Please open a new bug report here:
https://forums.unraid.net/bug-reports/prereleases/
and be sure to include your diagnostics.zip (from Tools -> Diagnostics)
Looks like just re-adding my drives to the pools causes them to show back up with all the datasets. Glad I didn't have to redo everything. I am re-adding permissions and settings to the shares, though.
-
Bummer, after going from RC3 to RC4 my ZFS pools are missing.
-
-
6 hours ago, ljm42 said:
To troubleshoot, we'd need to see the syslog after the problem happens. The easiest way to get it is by uploading a diagnostics.zip (from Tools -> Diagnostics or by typing "diagnostics" at an SSH prompt)
Best to start a new thread for your issue here in the prerelease area.
Removing my SLOG and L2ARC drives and then upgrading allows my array to start. Will see if I can re-add them now...
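If re-adding them from the console works, it should just be the usual zpool commands; the pool name and device paths here are placeholders, and I'm not sure yet how well Unraid's own pool management tolerates doing it this way:
zpool add zfs log /dev/disk/by-id/your-slog-device     # re-attach the SLOG
zpool add zfs cache /dev/disk/by-id/your-l2arc-device  # re-attach the L2ARC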
-
Hmm, after going from RC2 to RC3 my array won't start; it just says "mounting disks".
*on edit* I couldn't figure it out. Going back to RC2 fixed whatever it was getting hung up on. Maybe it's my L2ARC and SLOG additions to my ZFS array?
-
Tried to import a previously created raidz2 pool with l2arc and slog devices, but it says that the filesystem is unmountable. I'll try removing the l2arc and slog drives and then just importing the raidz2 pool; maybe I can re-add them afterwards.
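From the console, this is roughly how I'd sanity-check the import before letting the GUI try again (pool name assumed):
zpool import          # lists importable pools and flags any missing log/cache devices
zpool import -N zfs   # import without mounting, just to confirm the pool assembles
-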
1 hour ago, Jclendineng said:
Installed on my test server and created a zfs pool; that created several errors in the log and the webui hung up in a sense, none of the disks were able to be assigned, and I could not delete any pools. I restarted and the server will not boot. Will grab logs and post a bug report tomorrow. Thanks for the release!
This also just happened to me. The server boots but the webui doesn't work.
*on edit* I was able to get it booting again by removing the config file for the pool that was causing the issue.
-
Unraid OS version 6.12.0-rc6 available
in Prereleases
Posted
Hmm, I have the same issue sending files to my backup server via SSH. Will try rolling back to see if that fixes it.
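To narrow it down I'll probably reproduce the transfer from the console first, something like this (host and paths are placeholders):
rsync -av --progress /mnt/zfs/backups/testfile root@backupserver:/mnt/zfs/backups/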