Vaggeto

Members
  • Posts: 43

Converted

  • Gender: Undisclosed

Vaggeto's Achievements

Rookie (2/14)

Reputation: 1

Community Answers

  1. Hi all, I am converting some data on the array to datasets, and after transferring the data (2TB) I got a "share deleted" dialog when renaming the share I had just created (and after checking with ZFS Master that the new share was a dataset). Some more detail on the process: I put a new disk in and formatted it to ZFS. I have an old XFS share named "unRAID_Backups". I renamed that existing XFS share on Disk1 to "unRAID_Backups_old", then created a new share called "Backups", which automatically created a "Backups" dataset. I then copied all of the data from the old share to the new share in Krusader. After the data was transferred into the new "Backups" dataset, I decided I wanted to keep the old name, so I renamed the new "Backups" share to "unRAID_Backups", and it gave me a pop-up message saying "share ...unRAID_Backups... deleted".
     Now when I look in my shares, "Backups" is still there (it wasn't renamed). But when I look at the dataset for this disk, it was renamed to "unRAID_Backups" and is now only taking up a small portion of the space (0.6TB vs 2TB). Also, when I check with the SIO script whether folders are datasets or plain folders, it shows: **Datasets in disk2 are** unRAID_Backups **Folders in disk2 are** Backups. The problem is, in Krusader or the file explorer, under Disk 2 there is no "unRAID_Backups" folder, only "Backups". So it appears to have renamed the ZFS dataset, but not updated its location, and now some of the data is just missing? I can see that the dataset itself is still pointed at the "Backups" folder when I run zfs list. Any thoughts or ideas on issues around renaming shares (that are also datasets)?
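     For reference, a minimal sketch of how this kind of mismatch can be inspected from the CLI, assuming the pool for this disk is named disk2 (names and paths here are illustrative, not copied from my actual output):

     ```bash
     # List datasets with their mountpoints and space used; a renamed dataset
     # keeps its old mountpoint unless the mountpoint is changed explicitly.
     zfs list -o name,mountpoint,used -r disk2

     # Show where one specific dataset is actually mounted.
     zfs get mountpoint disk2/unRAID_Backups

     # Compare against what is visible on the filesystem under the disk.
     ls /mnt/disk2
     ```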
  2. I just had a similar issue. I am converting some data on the array to datasets and got a "share deleted" dialog when renaming. I put a new disk in and formatted it to ZFS. I have an old XFS share named "unRAID_Backups". I renamed the existing XFS share on Disk1 to "unRAID_Backups_old". I then created a new share called "Backups", which automatically created a "Backups" dataset. I then copied all of the data from the old share to the new share in Krusader. I decided I wanted to keep the old name, so I renamed the new "Backups" share to "unRAID_Backups", and it gave me a pop-up message saying "...unRAID_Backups... deleted". When I look in my shares, Backups is still there (it wasn't renamed). But when I look at the dataset, it was renamed to "unRAID_Backups" and is now only taking up a small portion of the space (0.6TB vs 2TB). Also, when I check with the SIO script whether folders are datasets or plain folders, it shows: **Datasets in disk2 are** unRAID_Backups **Folders in disk2 are** Backups. The problem is, in Krusader or the file explorer, under Disk 2 there is no "unRAID_Backups" folder, only "Backups". So it appears to have renamed the ZFS dataset, but not updated its location, and now some of the data is just missing? I'm lost and not sure if this symbolic link issue is also present.
  3. Alright, so hopefully my last update: I unmounted the dataset ("zfs unmount /mnt/documents_ha/Documents") and, oddly, even though it appeared to complete successfully with no errors, it still shows in zfs list like it did before I ran it, but now the actual data is all visible again! Really not sure what to think now, haha, but the dataset must have been sitting on top of it. Now to find out if that dataset still exists.
     Alright, so the end solution ended up being: remove the dataset by unmounting it, rename my folder to "temp", then recreate the dataset and move my data over to that folder, which brought the disk space used calculations into alignment. Oddly, the only other thing I needed to do was stop sharing the share and then set it back to private to get it to show up over SMB in Windows again. Otherwise it was not visible or accessible via Windows.
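     Written out as commands, the sequence I mean is roughly the following (assuming the pool is documents_ha and the dataset/folder is Documents; adjust the names to your setup, and double-check what each dataset contains before destroying anything):

     ```bash
     # Unmount the dataset that was mounted on top of the plain folder.
     zfs unmount documents_ha/Documents

     # The original folder with the data should be visible again; move it aside.
     mv /mnt/documents_ha/Documents /mnt/documents_ha/temp

     # Confirm the old dataset is empty, then remove and recreate it.
     zfs list -o name,used documents_ha/Documents
     zfs destroy documents_ha/Documents
     zfs create documents_ha/Documents

     # Move the data into the new dataset and clean up the temp folder.
     rsync -a /mnt/documents_ha/temp/ /mnt/documents_ha/Documents/
     rm -r /mnt/documents_ha/temp
     ```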
  4. To add a bit of new information, I ran "df -h" and it shows:
     Filesystem     Size  Used  Avail  Use%  Mounted on
     documents_ha   3.4T  1.6T  1.9T   46%   /mnt/documents_ha
     But it's a 4.1TB drive (4TB formatted NVMe). So overall I'm seeing:
       • unRaid GUI shows 1.92TB used, 2.04TB free.
       • unRaid File Explorer (calculate) shows 1.68TB used (in 3,454 folders and 54,413 files).
       • Krusader shows 1.9 TiB free out of 3.4 TiB (1.5 TiB / 1,675,196,012,297 bytes in 54,412 files and 3,349 sub-folders).
       • CLI "df -h" shows 1.6T used and 1.9T available.
       • CLI "du -hs" on this drive shows 1.6T and doesn't include any of the files which disappeared.
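     For anyone comparing, these are the commands behind those numbers, plus ZFS's own accounting as a third view (the pool name documents_ha is from my system; "zfs list -o space" breaks usage down into snapshots, child datasets, and the dataset itself):

     ```bash
     # Filesystem-level view (roughly what the GUI mirrors).
     df -h /mnt/documents_ha

     # What the files that are actually reachable on disk add up to.
     du -hs /mnt/documents_ha

     # ZFS accounting: space used by snapshots, children, and the dataset itself.
     zfs list -o space documents_ha
     ```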
  5. Hello! I just upgraded a cache drive: I copied the data off of the old btrfs drive to an unassigned drive, and then onto the new ZFS-encrypted drive. This created no datasets, so I tried to use the ZFS Master plugin to add one. I created it with the same name as one of the folders, and right away all the files disappeared, but I noticed the disk space used on this cache drive didn't go down. I'm not worried about the data itself as I could just re-copy it over, but I suspect the files are still there somehow? The Unraid GUI shows 1.92TB used, the unRaid file explorer shows 1.68TB used, and ZFS Master shows 1.75TB used. The missing data was 302GB. Any ideas on how I can check? Thanks!
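     In case it helps anyone with the same symptom, this is the sort of check I had in mind, assuming the cache pool is named cache and the folder/dataset in question is called appdata (both names are placeholders):

     ```bash
     # See which datasets exist on the pool and where they are mounted.
     zfs list -r cache

     # Temporarily unmount the dataset that shares a name with the folder...
     zfs unmount cache/appdata

     # ...and check whether the original folder contents reappear underneath.
     ls -la /mnt/cache/appdata

     # Remount the dataset when done checking.
     zfs mount cache/appdata
     ```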
  6. Thanks, I will look into these recovery options. Considering I saw a few BTRFS errors initially a month ago but the scrub showed no files with errors, any thoughts? I was expecting some corrupted files, including my docker file and VM file, but it was clear there were no errors, and then only a month later it comes back as unmountable. Maybe one of the 2 NVMe drives is dying? They are still under warranty.
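     A couple of health checks I plan to run on the drives themselves (device names below are examples for my dual-NVMe cache; the btrfs counters need the filesystem mounted, so that one only applies once the pool is back):

     ```bash
     # SMART / NVMe health summary for each cache device.
     smartctl -a /dev/nvme0n1
     smartctl -a /dev/nvme1n1

     # Per-device error counters kept by btrfs (read/write/flush/corruption),
     # once the pool mounts again.
     btrfs device stats /mnt/cache
     ```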
  7. Hi all, I'm having a bit of an odd issue that I'm hoping you can shed some light on. To quickly summarize, my dual-NVMe RAID1 cache drives are now both showing "Unmountable: No file system" (screenshot attached) and I can't load my dockers or my VMs as a result. I also can't balance or scrub them to look for errors.
     Taking a step back slightly: recently my VM, which I keep on the cache, randomly stopped working, and my docker service wouldn't load after a system reboot. I looked at the terminal and saw a few btrfs errors (BTRFS error (device loop3): open_ctree failed). After some searching I did a balance and a scrub, which ideally would have revealed any corrupted files so I could restore them and move on. Sadly, it came back with zero errors after multiple scrubs, and the terminal was no longer showing errors. So I deleted my docker file and rebuilt my dockers, and everything seemed fine except that my original Windows VM still wouldn't boot. I assumed it was corrupted and was still hoping to fix it, so it just sat there waiting. Now, a month later, I'm randomly seeing this "Unmountable: No file system" as shown in the screenshot. I will attach diagnostics, but does anyone have any ideas? Thanks all!
     Side note: I know I have a dying/disabled hard drive in my array. It was already completely empty when it started dying and I have dual parity, so I'm not too worried about it. I have 2 new large parity drives I'm going to switch to, but I wanted to clear up this cache drive issue first before messing with the array and rebuilding my parity. unraid-diagnostics-20221212-1148.zip
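     For anyone searching later, these are the kinds of read-only recovery steps I've been reading about for a btrfs pool that won't mount (device names are examples from a dual-NVMe cache; "btrfs restore" copies data out to another location rather than repairing in place, and I'd verify everything against the btrfs docs before running it):

     ```bash
     # Read-only consistency check of the filesystem metadata (changes nothing).
     btrfs check --readonly /dev/nvme0n1p1

     # Try mounting read-only from an older tree root to see if data is reachable.
     mkdir -p /tmp/cache_ro
     mount -o ro,usebackuproot /dev/nvme0n1p1 /tmp/cache_ro

     # If mounting fails, attempt to copy files out to an array disk without mounting.
     btrfs restore /dev/nvme0n1p1 /mnt/disk1/cache_rescue/
     ```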
  8. My auto-start fixed itself with 6.10.2 with no changes to the scripting.
  9. Has anyone had this process break after updating to 6.10 or other newer versions? I went from something like 6.7 or 6.8 to 6.10. It just doesn't work now, but it did consistently before. I'm not seeing any message in the console, but I could just be missing it.
  10. Hello, this is very helpful! Are you able to provide any recommendations on editing the acpi_handler file on each boot? Is it just a script that runs on startup to modify that line of code? Or replace the entire acpi_handler file each time? Any example of how this can/should be done? I tried googling but am not coming up with anything that's helpful for someone without a ton of experience.
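      In case a concrete example helps, this is the general pattern I've seen suggested: since the root filesystem is rebuilt from the flash drive on every boot, the change has to be re-applied from /boot/config/go at startup. The sed pattern, the handler path /etc/acpi/acpi_handler.sh, and the backup copy location are all assumptions on my part rather than anything official:

      ```bash
      # Additions to /boot/config/go (runs once at every boot)

      # Option A: patch the relevant line in place. Replace OLD_LINE_PATTERN and
      # NEW_LINE with the actual change from the earlier post (placeholders here).
      sed -i 's/OLD_LINE_PATTERN/NEW_LINE/' /etc/acpi/acpi_handler.sh

      # Option B: keep a pre-edited copy of the whole handler on the flash drive
      # (hypothetical path) and overwrite the stock one with it.
      cp /boot/config/custom/acpi_handler.sh /etc/acpi/acpi_handler.sh
      chmod +x /etc/acpi/acpi_handler.sh

      # Restart acpid so it picks up the modified handler.
      /etc/rc.d/rc.acpid restart
      ```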
  11. Just installed this and it works great, thank you! A bit of feedback: it's not very intuitive how to trigger the WOL once you save a new item. Also, the default values when adding a new item are confusing, and it's not clear they are editable fields, since clicking them does nothing initially. Defaulting to one of the fields (the MAC, ideally) in edit mode when clicking "new" would make that more obvious. Also, I'm not fully sure why you ask for the IP address along with the MAC address. If it has an IP address, doesn't that mean it's already awake? Or maybe there is more functionality I don't understand about WOL to an awake device.
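      On the MAC vs IP question: as I understand it, a standard WOL magic packet is addressed purely by MAC and sent to the local broadcast address, so I'd guess the IP field is only used for status checks (e.g. pinging to see whether the machine is already up). For example, from the command line something like this wakes a machine without knowing its IP (the MAC is a placeholder, and etherwake may need to be installed separately):

      ```bash
      # Send a WOL magic packet out eth0 to the given MAC (broadcast frame).
      etherwake -i eth0 AA:BB:CC:DD:EE:FF
      ```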
  12. I just recently set up a system on the 3970X with 3 VMs (2 with pass-through graphics), and to be honest, although I spent a couple of days troubleshooting and figuring out how to pass things through, it's actually pretty easy to do and would only take me a couple of hours if I had to start from scratch. I knew very little about VMs beforehand, which is also why the research took me some time. unRAID 6.9 fixed one specific issue I was having, and I wouldn't recommend trying without it, although in many setups it would likely work with just more manual tweaking. The biggest issue you'll have with 3 VMs is PCIe slots and their spacing on the motherboard, but PCIe extender cables would easily solve that, and I can't think of any other issues you might have. I used the Gigabyte TRX40 Aorus Master and would recommend it: easy to pass at least 2 different sets of USBs to VMs, easy to pass both separate on-board audio outputs, easy to pass graphics, easy to pass NVMe ports.
  13. Thanks for the tips and info, this guide has been very helpful. Are there any updates with the newer unRAID and the possible use with Infiniband? I know the card still doesn't show in network settings until set to Ethernet mode. I'd really like to try the 40GbE out instead of 10GbE. For anyone who doesn't have easy access to a Windows machine for the easy updates/configuration changes, you can pass the card through to a VM and do it from there without using a separate computer. You won't see it back in unRAID network settings until you "unstub" the card though, removing it from the VM. I also rebooted my entire unRAID server rather than just the VM when Windows said that was needed for the card changes to take effect.
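      For reference, as far as I understand it the configuration change itself boils down to flipping the port type with Mellanox's mlxconfig tool (part of the MFT package, available for both Windows and Linux). The device path below is an example from "mst status", and the exact parameter depends on the card generation, so treat this as a sketch:

      ```bash
      # Start the Mellanox tools service and list the device handles.
      mst start
      mst status

      # Query current settings, then set port 1 to Ethernet mode (1 = IB, 2 = ETH).
      mlxconfig -d /dev/mst/mt4099_pci_cr0 query
      mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2

      # In my case a reboot of the whole unRAID server (not just the VM) was
      # needed for the change to take effect.
      ```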
  14. Hello! I'm having some issues getting this working. I've got all the dockers installed and running, but it doesn't appear that the telegraf database is being created when I start Telegraf, because Grafana isn't finding it. My ports for InfluxDB are 8083 and 8086, which I've left at the defaults, and I've tried pointing telegraf.conf and Grafana at both of them. Any ideas? Here is my telegraf.conf: https://pastebin.com/N0bGbjVe
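      For anyone comparing configs, the part that matters is the InfluxDB output section of telegraf.conf; InfluxDB 1.x listens for writes on 8086 (8083 was only the old admin UI), and Telegraf creates the database on connect if it doesn't exist. The IP and database name below are placeholders for my setup:

      ```toml
      [[outputs.influxdb]]
        # HTTP API endpoint of the InfluxDB container (port 8086, not 8083).
        urls = ["http://192.168.1.10:8086"]

        # Database Telegraf writes to; this is also what Grafana's InfluxDB
        # data source should point at.
        database = "telegraf"
      ```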
  15. Yes, but I'm not 100% sure I need them. I have a large Movie share which goes across 2-3 disks. When browsing movies, I hate to have to wait for another drive to spin up when testing out different movies so I put them all on the same spin-up group. If unRAID would do that automatically for shares based on "included" disks, that would remove the need for me to do this.