Everything posted by danielocdh

  1. The new path for the image seems to be /mnt/user/system/libvirt/libvirt.img. Thanks!
  2. I ended up using an Ubuntu docker image with Wine (to run clrmamepro). I had to map every disk to its own folder in the docker (/mnt/disk1/emu, /mnt/disk2/emu, ... instead of a single /mnt/user/emu). The downsides of this method that I've seen so far:
     - I'm getting warnings about some sets ("Set exists in various rompaths"); this could probably be fixed by manually moving each set's files to the same disk.
     - Created/moved files are owned by root with permissions 644; I'm using chmod/chown in the unraid terminal to fix this for now (a sketch follows below).
     As for speed, scanning seems normal (for multiple rom paths), but I'm guessing writing/fixing could be slower because of parity. I tried the same docker with /mnt/user/emu mapped instead; it didn't seem to freeze, but the speed was slower. I used the default clrmamepro options, except that I unchecked Scanner->Advanced->Deeper check for fixable missing files; it might work with that option checked, but it takes longer and I didn't need it.
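     A minimal sketch of the ownership/permission fix, run from the unraid terminal (the emu paths are from my setup; nobody:users is unraid's usual share owner):

        # give files the container created as root back to unraid's standard owner
        chown -R nobody:users /mnt/disk1/emu /mnt/disk2/emu
        # make them group-writable again (files 644 -> 664, directories -> 775)
        chmod -R u+rwX,g+rwX,o+rX /mnt/disk1/emu /mnt/disk2/emu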
  3. So a while ago I moved my rom collections to unraid, but now when I try to scan them to update them, clrmamepro just gets stuck on most files over and over; there is no network transfer or disk reading on unraid besides 100KiB or less every 5-15 seconds. The problem only comes up when scanning the files on unraid from any other computer. I tested scanning the files on computer1 from an unraid VM and it works; I tried scanning the files on computer1 from computer2 and it also works: the speed isn't great (1-10MB/s) but it finishes the scans. I already tried all the samba optimizations I could find (the sort of thing I mean is below); I don't think any made a difference. I also tried uninstalling plugins. Is there any fix for this, or something else I can try?
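     For reference, the kind of samba tuning I mean, added under Settings -> SMB -> Samba extra configuration (illustrative values; none of these helped in my case):

        # clrmamepro probes many filename case variants; avoid per-name scans
        case sensitive = yes
        use sendfile = yes
        # enable async I/O for reads/writes of any size
        aio read size = 1
        aio write size = 1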
  4. The issue is fixed since unraid version 6.10.0 (2022-05-07), so you only need to use the script on older versions. The fix I'm using now (tested on unraid 6.8.3, 6.9.1, 6.9.2) is to edit stop_running_machines in /etc/rc.d/rc.libvirt so it correctly handles paused VMs on shutdown/reboot or array stop. Basically I just added commands so stop_running_machines first tries to resume the paused VMs; after that they get hibernated/shut down as usual (see the sketch below). rc.libvirt is not persistent, so I created a php script to apply the edit as needed; I'm running the script with CA User Scripts with the schedule set to "At First Array Start Only" (you need to run the script once after creating it). (php script file attached)
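     The essence of the added commands, as a sketch (the attached php script patches stop_running_machines in /etc/rc.d/rc.libvirt to do roughly this):

        # resume every paused guest first, so the normal hibernate/shutdown path applies
        for uuid in $(virsh list --state-paused --uuid); do
          virsh resume "$uuid"
        done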
  5. There are only 2 "upon host shutdown" settings; which one would I have to set to avoid an incorrect shutdown? I guess I didn't fully explain myself: I tried "upon host shutdown" set to both shutdown and hibernate, and the VM still gets powered off incorrectly. The virtio tools are installed, and manual hibernate and stop from unraid work correctly (windows is hibernated/shut down properly).
  6. I have a windows VM on my ssd cache drive. I want to keep it paused most of the time, but when it is paused and I shutdown/reboot unraid, the windows install inside the VM gets powered off incorrectly (instead of hibernated/shut down correctly). If I hibernate the VM manually, it usually takes around 10 seconds for the ssd to stop showing reads/writes; I increased the "VM shutdown time-out" to 90 but the problem persists. The VM doesn't have any extra hardware attached to it. Is this normal unraid behavior? Any way to work around it (a manual workaround is sketched below)? Thanks.
     Edit: unraid (6.8.3) isn't considering/handling paused VMs correctly (/etc/rc.d/rc.libvirt -> stop_running_machines). If you have a paused VM you'll run into forced VM shutdowns when shutting down/rebooting unraid, and also into issues when trying to stop the unraid array. I made a solution for this (see below); I tested it on versions 6.8.3, 6.9.1 and 6.9.2.
     Update: The issue seems a bit worse on 6.9.1, as it forces a parity check after the unclean shutdown caused by the bug. Luckily the solution still works for 6.9.1.
     Last update: The issue seems fixed, and it is marked as fixed in the changelog, since unraid version 6.10.0 (2022-05-07). You can still use the solution on older unraid versions.
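     Until a fix is applied, the manual workaround is just resuming paused guests before shutting down, from the unraid terminal (the VM name here is an example; use yours):

        virsh list --all            # look for guests in the "paused" state
        virsh resume "Windows 10"   # resume them so the normal shutdown path applies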
  7. Are folder exclusions recursive? For example, if I exclude the folder _abc, will that exclude /mnt/user/x/_abc/some_file.zip? If not, how can I achieve this? Thanks.
  8. The machine is on a configured UPS; the only time the power went out (while I was sleeping) there was no automatic parity check after I turned it back on, and I'm sure there have been at least 2 manual parity checks without errors after that (before the one with errors). I also don't remember ever having an automatic parity check. It's really weird (in my mind) to not be able to pinpoint the error (exact file and exact bytes in the file); assuming the parity drive has the same chance of having wrong bytes for whatever reason, it seems senseless not to be able to know which file(s)/bytes might be damaged. Thanks for the answers. What I'll do for now (when I have enough time) is test openmediavault+snapraid in a VM; I'll be trying to find out if it solves the issue (of telling where the possible damage is) and if I can use it without having to rewrite my drives (besides parity).
     Edit: it seems that snapraid (in openmediavault) will keep hashes and parity data, but they won't get auto-updated the way parity does on unraid when you edit/add/remove a file (you have to sync manually; see the sketch below); on error it will show the path of the file and the drive symlink/path. Most likely it is easier/better to just install something like Dynamix File Integrity on unraid.
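     Roughly, the snapraid workflow I mean (these are real snapraid commands, run on the snapraid host):

        snapraid sync     # update parity and the stored file hashes after changes
        snapraid scrub    # verify data against the hashes; errors report the file path
        snapraid -e fix   # repair only the blocks that scrub flagged as bad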
  9. Unraid Parity check: 26-08-2020 17:50 Notice [HD] - Parity check finished (0 errors) Duration: 4 hours, 4 minutes, 58 seconds. Average speed: 136.1 MB/s
     I didn't reboot, it just took me a while to start another check. I started one check and cancelled it quickly because I wasn't sure if I had unchecked the corrections checkbox, then started a check again and let it run fully. I don't think I removed or added any files between the failed and the correct check. How do I know what caused those original 20 errors, and what files (if it wasn't a parity mistake) were affected? Thanks
  10. I manually ran a non-corrective parity check and got this:
      Unraid Parity check: 25-08-2020 21:49 Notice [HD] - Parity check finished (20 errors) Duration: 3 hours, 26 minutes, 33 seconds. Average speed: 161.4 MB/s
      All my drives are green and there wasn't any unclean shutdown. The previous parity check was ~35 days ago, and uptime was ~13 days and 15 hours at the time of the parity check. Is there a way to know which files are affected? I checked the SMART logs on all my drives and they are all empty except for the parity drive, which has the log below (the command I used is noted after this post). As I understand it, the errors are from 25 days of power-on time, but my uptime is only 13 days, so I'm not sure when this happened:

         ATA Error Count: 3
         CR = Command Register [HEX]
         FR = Features Register [HEX]
         SC = Sector Count Register [HEX]
         SN = Sector Number Register [HEX]
         CL = Cylinder Low Register [HEX]
         CH = Cylinder High Register [HEX]
         DH = Device/Head Register [HEX]
         DC = Device Command Register [HEX]
         ER = Error register [HEX]
         ST = Status register [HEX]
         Powered_Up_Time is measured from power on, and printed as
         DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
         SS=sec, and sss=millisec. It "wraps" after 49.710 days.

         Error 3 occurred at disk power-on lifetime: 611 hours (25 days + 11 hours)
           When the command that caused the error occurred, the device was active or idle.
           After command completion occurred, registers were:
           ER ST SC SN CL CH DH
           -- -- -- -- -- -- --
           04 51 01 00 00 00 a0  Error: ABRT
           Commands leading to the command that caused the error were:
           CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
           -- -- -- -- -- -- -- --  ----------------  --------------------
           b0 d6 01 e0 4f c2 a0 00      00:08:32.180  SMART WRITE LOG
           b0 d6 01 e0 4f c2 a0 00      00:08:32.067  SMART WRITE LOG
           b0 d6 01 e0 4f c2 a0 00      00:08:31.895  SMART WRITE LOG
           ec 00 00 00 00 00 a0 00      00:02:48.494  IDENTIFY DEVICE
           b0 d8 01 01 4f c2 a0 00      00:02:48.409  SMART ENABLE OPERATIONS

         Error 2 occurred at disk power-on lifetime: 611 hours (25 days + 11 hours)
           When the command that caused the error occurred, the device was active or idle.
           After command completion occurred, registers were:
           ER ST SC SN CL CH DH
           -- -- -- -- -- -- --
           04 51 01 00 00 00 a0  Error: ABRT
           Commands leading to the command that caused the error were:
           CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
           -- -- -- -- -- -- -- --  ----------------  --------------------
           b0 d6 01 e0 4f c2 a0 00      00:08:32.067  SMART WRITE LOG
           b0 d6 01 e0 4f c2 a0 00      00:08:31.895  SMART WRITE LOG
           ec 00 00 00 00 00 a0 00      00:02:48.494  IDENTIFY DEVICE
           b0 d8 01 01 4f c2 a0 00      00:02:48.409  SMART ENABLE OPERATIONS
           60 10 98 00 00 00 40 00      00:02:48.217  READ FPDMA QUEUED

         Error 1 occurred at disk power-on lifetime: 611 hours (25 days + 11 hours)
           When the command that caused the error occurred, the device was active or idle.
           After command completion occurred, registers were:
           ER ST SC SN CL CH DH
           -- -- -- -- -- -- --
           04 51 01 00 00 00 a0  Error: ABRT
           Commands leading to the command that caused the error were:
           CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
           -- -- -- -- -- -- -- --  ----------------  --------------------
           b0 d6 01 e0 4f c2 a0 00      00:08:31.895  SMART WRITE LOG
           ec 00 00 00 00 00 a0 00      00:02:48.494  IDENTIFY DEVICE
           b0 d8 01 01 4f c2 a0 00      00:02:48.409  SMART ENABLE OPERATIONS
           60 10 98 00 00 00 40 00      00:02:48.217  READ FPDMA QUEUED
           ea 00 00 00 00 00 a0 00      00:01:06.438  FLUSH CACHE EXT

      I was going to post my diagnostics zip, but it seems that even though I chose to anonymize it, some information is still there, so I'll have to check it first; please let me know if you need a specific file from it.
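      For reference, the log above is smartctl's ATA error log; something like this pulls it for a single drive (the device name is an example):

         smartctl -l error /dev/sdb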
  11. So I created my unraid array recently and started using it (from my windows pc on a gigabit LAN), and I'm wondering some things about fragmentation:
      - I found that filling my array using ftp (one transfer/file at a time) is way more convenient for me (queue, auto overwrite, auto retry, and transfers/writes look faster). Should I be concerned about fragmentation with ftp? Should I be using smb instead to avoid fragmentation?
      - I'll be downloading torrents directly to the array via an smb share. Sometimes my network speed is slow, so it might take a few to several hours to download a big torrent. My torrent client has the option to "allocate and zero new files on creation"; should I use it to prevent fragmentation? Should I be worried at all about preventing fragmentation in this case?
      - Related to the previous one, my torrent client also has the option to use sparse files instead of "allocate and zero new files on creation"; would this make any difference when saving to an smb share?
      Edit: I still don't know how relevant fragmentation might be for xfs, but I did some quick testing with xfs_bmap (the array has new, almost empty disks, the shares use "most-free" allocation, and I was using reconstruct write; the command I used is sketched below), and it seems that:
      - SMB copy keeps files in few pieces: 1-2 for small files, and bigger files split around every 3-4GiB.
      - FTP copy (without writing something else at the same time) is very similar to SMB.
      - Torrenting without preallocation uses around one piece per 1-5MiB; results with sparse files are similar (sparse files seem to be working, but don't make much difference in the number of pieces).
      - Torrenting with preallocation is similar to SMB, or maybe even keeps files in fewer pieces.
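      The check itself, in case anyone wants to reproduce it (the file path is an example; run it against the disk share, not the user share):

         # print the extent list for a file; each line is one contiguous piece on disk
         xfs_bmap -v /mnt/disk1/share/somefile.bin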
  12. Preventing smb users from deleting .Recycle.Bin while still allowing the plugin to work (I assume I'll have to keep testing, but it seems to work so far). I was wondering/looking for confirmation about the plugin or unraid changing the permissions I set manually. I did some more testing with unraid changing the share's security setting, export setting and users with access, and it seems my manually set permissions weren't changed.
  13. I have been testing a bit, and it seems that setting the .Recycle.Bin folder's owner/group to root and its permissions to 0700 blocks smb users from deleting/accessing it (see below). This works great for me, but I'm thinking unraid or even the plugin might change it at some point. Will unraid/the plugin respect the permissions I set for .Recycle.Bin?
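      What I did, from the unraid terminal (the share name is an example):

         # owned by root, accessible only by root; smb users can no longer touch it
         chown root:root /mnt/user/share/.Recycle.Bin
         chmod 0700 /mnt/user/share/.Recycle.Bin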
  14. At least add a warning/note when rebooting/shutting down. I just assumed I could continue from 90% after a shutdown :/