xRadeon

Everything posted by xRadeon

  1. Yes, it will re-install after a reboot by using the already downloaded gz file.
  2. When installing, the plugin untars the files to the directory "/opt/microsoft/powershell/7/". The pwsh binary is inside that directory, and it's not an "exe" file; it's a native Linux binary named just 'pwsh'. As for Docker, I guess what you could do is map "/opt/microsoft/powershell/7/" to some path in the container, and then you should be able to call pwsh from that path (a rough sketch of the idea is below). I don't know if it would work. Some Docker images are very cut down, so I'm not sure pwsh will have access to everything it needs to run properly, but I'm not an expert or anything, so maybe it will work just fine. Good luck!
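     A minimal sketch of what that mapping could look like, assuming a generic container (the image name and the test command are placeholders, not anything from the plugin):

       # Bind-mount the host's PowerShell install into the container (read-only)
       # and call pwsh from the mapped path. Whether it actually runs depends on
       # the image shipping the libraries pwsh needs (libicu, etc.).
       docker run --rm \
         -v /opt/microsoft/powershell/7:/opt/microsoft/powershell/7:ro \
         some/image \
         /opt/microsoft/powershell/7/pwsh -Command 'Get-Date'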
  3. I've created a plugin that installs PowerShell for you if you still need it!
  4. Greetings, I've authored a plugin to install Microsoft PowerShell. The plugin is found here: https://github.com/x-radeon/unraid-powershell
     Please note I'm not experienced with writing plugins for Unraid, so install at your own risk. I use the plugin on two of my Unraid installs to manage my BTRFS snapshots with no issues, so I know it works. If you know how to write plugins, please help improve it!
     Steps to install:
       • On your Unraid install, go to Plugins > Install Plugin
       • For the URL, paste in https://raw.githubusercontent.com/x-radeon/unraid-powershell/main/PowerShell.plg
       • Click Install
     To use PowerShell after install, type pwsh at a terminal, or call pwsh in a bash script (when using "User Scripts", for example) like so:
       #!/bin/bash
       pwsh /boot/config/plugins/myscripts/script.ps1
     -xRadeon
  5. I guess I should update this. It's been working really solid for weeks now. I did one test where I re-enabled "Host access to custom networks", and within hours it had locked up with the same output as above. So for me, the solution was to disable host access to custom networks in the Docker settings.
  6. It was up for about 8 days, then I had some bad luck over the weekend as the power went out for longer than my UPS lasted. It hasn't crashed since the power outage, so that's good. I'm thinking if it can go about 14 days without crashing, that's probably a good sign that it's fixed, so I'll update in about a week and a half.
  7. I actually had a second panic the other day. I went through the link above, and it seems like there isn't a solid conclusion as to what the problem is. It's for sure a networking issue related to Docker, but it doesn't seem like it happens 100% of the time when you have containers using the custom br0 network. Someone did post on that thread about the "Host access to custom networks" option causing the problem. I've disabled that and I'll see if I get another panic (I've been up for 2 1/2 days now).
  8. Hmm, I'm not sure that it's the br0/NAT issue. Most of my dockers are bridge or host; none are br0 with a static IP. It's only panicked one time, so perhaps it's not that big of a deal. If it happens again, I'd start to be concerned. Thank you for your reply!
  9. Hello all, I just had my first Unraid kernel panic. I'm not too familiar with how to decode the panic output, so I'm unsure what caused it. Below is a screenshot of as much of the panic info as I could capture. It crashed around ~0120 for me (I can tell from my Observium data stopping around that time). I'm not sure if any tasks ran at that time; however, I was finishing up a full BTRFS balance operation (after adding a new disk) on my 'hdd' BTRFS pool. There's a possibility it completed right around the time the system crashed, so it's potentially a BTRFS issue. I've run a full memtest and it completed with no errors, though I only did one full pass. I've also attached the diagnostics zip if needed. Thanks to anyone who takes the time to look at this! island-diagnostics-20210425-1003.zip
  10. (I apologize if there's already a large thread on this.) With 6.9 out, I've started my long-planned project to move my array to a "hdd" pool (hdd is just the name I gave it) in Unraid while still also having an nvme pool (cache pool). With basically no array, the mover is something I can't use, since it's tied to moving between the array and a "pool" of your choice. However, it's something I'll need, since I want files to drop into the nvme pool by default and move to the hdd pool as the nvme pool fills up. Are there any official plans, or does anyone know if a community plug-in is in the works, that would allow the Mover to be used to move files between two pools? I think if officially supported, the share settings would look like this:
        • Use mover on share: (No, Yes, Only, Prefer)
        • Select where new files are placed (array or pool): (List of pools + the array; replacement for the "Select cache pool" option)
        • Select where files are moved to (array or pool): (List of pools + the array, minus the selection from above; not used when set to "Only")
      Then the mover script would be modified to use these values (a rough user-script workaround is sketched below in the meantime). I don't think it should take much modification, but I have no idea how complex or large the mover script is. Also, modifying the mover script may be something Unraid doesn't want to do, since it's pretty mature at this point.
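      A minimal sketch of that workaround, assuming the pools are mounted at /mnt/nvme and /mnt/hdd and the share is called "myshare" (all placeholders, and the threshold is arbitrary); this is not the built-in mover, just an illustration of the idea:

        #!/bin/bash
        # Hypothetical user script: once the nvme pool passes a fill threshold,
        # move one share's files to the hdd pool. Paths, share name, and
        # threshold are placeholders; test on unimportant data first.
        SRC=/mnt/nvme/myshare
        DST=/mnt/hdd/myshare
        THRESHOLD=80   # percent used on the nvme pool before moving

        USED=$(df --output=pcent /mnt/nvme | tail -1 | tr -dc '0-9')
        if [ "$USED" -ge "$THRESHOLD" ]; then
          # Copy files across, then remove source copies that transferred cleanly
          rsync -a --remove-source-files "$SRC/" "$DST/"
          # Clean up now-empty directories left behind on the source pool
          find "$SRC" -mindepth 1 -type d -empty -delete
        fi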
  11. The mover is an Unraid function that moves files from a cache pool to the array, or from the array to the cache. It's used mainly in the situation where downloads or file copies are placed on the cache pool by default but are then moved to the array when the cache fills up. Mover operations are defined at the share level. If you go to the Shares tab you'll see all your shares (folders at the root of the array or of pools/cache drives). If you click into a share, you can configure the "Use cache pool" option; the help text there describes the 4 options (No, Yes, Only, Prefer). Most people will probably use the "Yes" option (for using the cache as a temporary landing ground) or the "Only" option (for times where you want files to always be on fast storage). I'd suggest using the "Only" option for the share that hosts the VM files. However, at first use the "Prefer" option, then shut down your VMs and run the mover (this will move the files from the array to the cache pool). Turn your VMs back on and then set the share to "Only". Note: files will never live on both the array and the cache pool at the same time (unless you copy them manually, which is not suggested). So this isn't a backup solution; all this defines is where you want files to live. If you want a backup solution, you'll need to do something different, like a plugin that will do VM backups for you, or BTRFS snapshots + btrfs send to a backup server (a rough sketch of the snapshot approach is below).
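      A minimal sketch of the snapshot + send idea, assuming the VM files live in a BTRFS subvolume on the cache pool (the paths, snapshot naming, and backup host are all placeholders):

        #!/bin/bash
        # Hypothetical sketch: snapshot a subvolume and stream it to a backup
        # server that also runs BTRFS. /mnt/cache/domains must be a subvolume
        # for the snapshot step to work.
        SRC=/mnt/cache/domains
        SNAPDIR=/mnt/cache/.snaps
        SNAP="$SNAPDIR/domains-$(date +%Y%m%d)"

        mkdir -p "$SNAPDIR"
        # Read-only snapshot (required for btrfs send)
        btrfs subvolume snapshot -r "$SRC" "$SNAP"
        # Send the snapshot over SSH to the backup box
        btrfs send "$SNAP" | ssh backup-host "btrfs receive /mnt/backup/snaps"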
  12. I never did figure out the solution to this; I kinda figured my SAS controller was dying, so I just ended up buying an LSI 9305-16i. I don't have the issue anymore.
  13. Just create a new path, replacing the host path with the path where you want the data to be stored (a rough sketch is below). This will overwrite the real data folder where the user folders are created; this works even though we're already passing through the root /data folder.
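      A minimal sketch of the same idea using docker run (the image name and the "userdata" subfolder are hypothetical, not the container's real layout; the point is just that the more specific mount shadows that subfolder of the broader /data mount):

        # Both mappings coexist: /data comes from appdata, while the user data
        # subfolder is overridden by a separate host path. Names are placeholders.
        docker run -d \
          -v /mnt/user/appdata/someapp:/data \
          -v /mnt/user/bulk-storage:/data/userdata \
          some/image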
  14. Gotcha, I'll just uninstall the plug-in! Thanks!
  15. Hi, I was wondering if someone could help me troubleshoot an error I'm getting on my two Unraid boxes with the Trim plugin. I'm getting this error on both boxes (the device is different between the two, but the error is the same):
        DATE kernel: BTRFS warning (device dm-4): failed to trim 1 device(s), last error -121
      I have tried to re-install the plugin, but it continues to throw this error. I was wondering if there's a way I can run the trim command manually with more verbose output (see the sketch below) so I can troubleshoot what error it's running into. I have Intel SSDs in the cache pool, formatted as BTRFS.
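      A manual run might look something like this (the mount point is an assumption; use whichever pool is throwing the warning):

        # Trim a mounted BTRFS filesystem manually with verbose output.
        # /mnt/cache is a placeholder for the affected pool's mount point.
        fstrim -v /mnt/cache
        # For reference, error -121 is EREMOTEIO ("Remote I/O error") in Linux errno terms.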
  16. The board is a mini-ITX board, so I only have one PCIe slot. I'll do some digging on that Google search and see if just getting that 9300-8i would solve the problem. There's also a BIOS option that talks about how to split the PCIe slot when using a riser card; I'm not sure if I played with that option or not. I'll try it out and see what I find. Thanks for the pointer!
  17. Specs:
        • Unraid: 6.7.2
        • Motherboard/CPU: Super micro M11SDV-8C-LN4F (AMD Epyc 3251 SoC)
        • RAM: 2x 16Gb ECC (tested good via Memtest86+ 5.01)
        • BIOS, IPMI: 1.0a, 3.13
        • HBA: LSI 9220-8i (IT mode FW 20.00.07.00; no BIOS installed I think, I'd have to check if it's still there, I may have erased it and not put it back on)
        • Drives: 5x HGST_HDN721010ALE604 & 4x INTEL_SSDSC2KW512G8 (8x plugged into the HBA, 1x plugged into the motherboard)
      Issue: Hello everyone, I have a very strange issue with a new motherboard I'm trying to use for my Unraid build, and I was hoping I could get some help. The issue I'm running into is that drives attached to my LSI HBA are generally not readable or detected (by this I mean they either do not appear at all in the web UI, or the syslog shows they are detected but cannot be accessed for some reason, and they also will not appear in the web UI). The issue always occurs if, after I power on the system and have booted into Unraid, I then reboot the system: after it boots, the drives will not be detected. If I reboot over and over, they still will not be detected. If I shut down the system and power it back up, the drives are then detected again. Sometimes, even if the drives are detected, when I go to start the array it will not start since the drives cannot be accessed. The single drive I have plugged into the motherboard works fine every time; I can always see it in the web UI. I suspect this is a BIOS/board issue, but I just want to rule out a driver or kernel issue with these Epyc 3000 CPUs in Unraid.
      Troubleshooting: I have tried testing different settings in the BIOS with no effect on the issue. For example, Legacy vs UEFI boot, Above 4G Access, IOMMU, Virtualization, Precision timing, consistent PCI device naming, etc. I've tried different combinations of BIOS settings, but nothing seems to have any impact on the issue at all. I know the HBA card is good since it works flawlessly in a Super micro A1SRi-2758F board I have; I can reboot that board as many times as I wish and the drives are always detected. When the drives are not detected, I generally see this in the syslog:
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: _base_wait_for_doorbell_not_used: failed due to timeout count(5000), doorbell_reg(ffffffff)!
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: Allocated physical memory: size(1687 kB)
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: Scatter Gather Elements per IO(128)
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: doorbell is in use (line=5195)
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: _base_send_ioc_init: handshake failed (r=-14)
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: sending diag reset !!
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: diag reset: FAILED
        Aug 24 22:05:01 Island kernel: mpt2sas_cm0: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:10651/_scsih_probe()!
      Or:
        Aug 30 22:28:07 Island kernel: mpt2sas_cm0: sending diag reset !!
        Aug 30 22:28:07 Island kernel: mpt2sas_cm0: diag reset: FAILED
        Aug 30 22:28:07 Island kernel: scsi target1:0:4: target reset: FAILED scmd(00000000f7e73a1b)
        Aug 30 22:28:07 Island kernel: mpt2sas_cm0: attempting host reset! scmd(00000000f7e73a1b)
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: tag#0 CDB: opcode=0x1a 1a 00 3f 00 04 00
        Aug 30 22:28:07 Island kernel: mpt2sas_cm0: Blocking the host reset
        Aug 30 22:28:07 Island kernel: mpt2sas_cm0: host reset: FAILED scmd(00000000f7e73a1b)
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: Device offlined - not ready after error recovery
        Aug 30 22:28:07 Island kernel: scsi 1:0:5:0: Device offlined - not ready after error recovery
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: [sdg] Write Protect is off
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: [sdg] Mode Sense: 00 00 00 00
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: rejecting I/O to offline device
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: [sdg] Asking for cache data failed
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: [sdg] Assuming drive cache: write through
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: rejecting I/O to offline device
        Aug 30 22:28:07 Island kernel: sd 1:0:4:0: [sdg] Attached SCSI disk
        Aug 30 22:28:07 Island kernel: mpt2sas_cm0: _config_request: waiting for operational state(count=1)
      I have opened a Super micro case, but after a few back-and-forth emails they abruptly closed my case, so I doubt they're going to be much help. They did say no one has reported an issue like this, so I could just have a bad board. I have thought about purchasing an LSI 9300-8i, but I don't want to dump 175 bucks if I still have the same issue. I may still buy it, since these LSI SAS2008 cards are getting somewhat old and have stopped getting FW updates.
      Logs: I've attached multiple logs and diagnostics; here are some descriptions:
        • island-syslog-20190824-1009.zip: Drives not detected/usable.
        • island-syslog-20190822-0054.zip: Drives detected/usable.
        • island-diagnostics-20190822-0053.zip: Drives detected/usable.
        • island-diagnostics-20190830-1031.zip: Drives not detected/usable.
        • island-syslog-20190830-1031.zip: Drives not detected/usable.
      Help me unraid forum, you're my only hope... Thanks!