Alexandro

Everything posted by Alexandro

  1. Just saw that 6.10.2 is available and decided to try upgrading from 6.9, with much the same result. The bad flash drive banner appeared again (on a brand-new flash drive). The VMs and Dockers started and are working, although Unraid itself tells me that the array must be started first to access the Docker/VM pages. Unraid reports the drives as offline, which is false (the drives appear to be online and the user shares are present). I do not know what exactly has been changed, but obviously the problems are still there. Take a look at the screenshots and the attached diagnostics log. Back to 6.9.2 again. unraid-diagnostics-20220528-1423.zip
  2. Although I was quite confident it was not flash drive related, I decided to follow your advice. A new flash drive was prepared. It booted to the same behavior. Once again I went back to 6.9.2. No more experiments with the production server for me.
  3. Downgraded back to 6.9.2 and the issues are solved. Reading other users' reports, we might conclude that 6.10 is buggy enough not to be considered stable.
  4. What I've noticed is that the server initiated a parity check, because an unclean shutdown was detected after the restart that followed the update. According to my observations, the running parity check is blocking the normal view of the web interface. Once I press "restart", a moment before the restart completes, the normal web interface view shows up; at that moment the parity check is stopped. Since no "stop parity check" button is available at the moment (but the check is in progress), how could I stop the parity check via the command line? I haven't found any command for this (a rough guess at one is sketched below). All disks, although mounted, are reported by Unraid as offline. Ten-year Unraid user here, and I've never had any problem with an update so far.
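     For anyone searching later, this is the rough approach I have in mind. It is only a sketch: it assumes Unraid's md helper at /usr/local/sbin/mdcmd accepts a "nocheck" command to abort a running check, which I haven't verified on 6.10, so please test before trusting it.

     ```python
     #!/usr/bin/env python3
     # Rough sketch: cancel a running parity check from the command line.
     # ASSUMPTION: Unraid's md helper /usr/local/sbin/mdcmd accepts "nocheck"
     # to abort a check in progress -- verify on your own box first.
     import subprocess

     MDCMD = "/usr/local/sbin/mdcmd"

     def stop_parity_check() -> None:
         # Ask the md driver to abort the check currently in progress.
         subprocess.run([MDCMD, "nocheck"], check=True)

     if __name__ == "__main__":
         stop_parity_check()
     ```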
  5. Unfortunately this is not the problem. Cleared the browser cache with no result. Sent from my iPhone using Tapatalk
  6. Hello, I just upgraded from stable to 6.10.0. Many problems after the upgrade. I was able to log in, but on the array screen I got a "Flash drive offline or corrupted" message. The array has started, and the Dockers and VMs too, but the usual information for the disks in the array is no longer visible. No control over the array; the buttons to start/stop the array have disappeared. Please kindly see my diagnostics and some screenshots. unraid-diagnostics-20220519-0914.zip BB267506-C135-4A75-9ABC-F569362B6780.heic 28CC8DE8-57EA-4DB7-ABEB-9EE50F436B43.heic
  7. Here for sale is a Supermicro X9SCA-O motherboard. It comes bundled with an Intel Xeon E3-1220 v2 and 16GB of ECC RAM (4x4GB). The stock Xeon cooler is installed. As a bonus I will include the following: 1. a Hyper TX3 EVO cooler; 2. a USB card to extend the port count. The original packaging of the motherboard, the user manual, 3 SATA cables and the I/O shield are included in this sale. The parts have been pulled from a working environment and have been tested thoroughly. All tests passed. The parts come from a pet- and smoke-free environment. I am looking for 170 euro (PayPal friends and family), shipping included.
  8. Thank you. The "Previous Apps" section is new to me; I will check it now. Thank you very much for this. Greatly appreciated.
  9. I had my docker.img file on a WD cache drive. I decided to move it to my SSD, which is an unassigned device. I pointed the path of the image to the new location and all my Dockers were absolutely fine. Two days later, while examining the logs, I noticed strange messages. I searched the forum and found that others have experienced the same problem, and I understood that the docker.img file went bad and needs to be recreated. My appdata folder is on the same unassigned device. Could you please confirm the steps needed: 1. Stop the array; 2. Delete the old docker.img file; 3. Create a new docker.img file; 4. Start the array; 5. Go to Community Applications; 6. Download the dockers I need; 7. The old paths and data of the dockers should stay the same. (A rough command-line sketch of steps 1-4 follows below.) Thank you in advance. p.s. Diagnostics file attached. unraid-diagnostics-20210317-2011.zip
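     For reference, this is what I understand steps 1-4 to look like from the command line. It is only a sketch and rests on assumptions on my part: that Unraid's Slackware-style service script /etc/rc.d/rc.docker handles "stop"/"start", that Unraid recreates a missing docker.img when the service starts, and the image path below is just a placeholder, so please correct me if I'm wrong.

     ```python
     #!/usr/bin/env python3
     # Rough command-line equivalent of steps 1-4 above.
     # ASSUMPTIONS: /etc/rc.d/rc.docker handles "stop"/"start", Unraid
     # recreates a missing docker.img on service start, and DOCKER_IMG is a
     # hypothetical path -- point it at your actual image location first.
     import pathlib
     import subprocess

     RC_DOCKER = "/etc/rc.d/rc.docker"
     DOCKER_IMG = pathlib.Path("/mnt/disks/ssd/docker.img")  # hypothetical path

     def recreate_docker_img() -> None:
         subprocess.run([RC_DOCKER, "stop"], check=True)    # stop the Docker service
         DOCKER_IMG.unlink(missing_ok=True)                 # delete the corrupt image
         subprocess.run([RC_DOCKER, "start"], check=True)   # start should recreate it
         # Steps 5-7 (reinstalling from Previous Apps) happen in the web GUI.

     if __name__ == "__main__":
         recreate_docker_img()
     ```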
  10. Here for sale is a Supermicro X9SRL-F motherboard (rev. 1.01). Key features: 1. Single socket R (LGA 2011), supports the Intel Xeon processor E5-2600/1600 and E5-2600/1600 v2 families; 2. Intel C602 chipset; QPI up to 8.0 GT/s; 3. Up to 512GB ECC DDR3, up to 1866MHz; 8 DIMM slots; 4. Expansion slots: 2 PCI-E 3.0 x8, 2 PCI-E 3.0 x8 (in x16), 2 PCI-E 3.0 x4 (in x8), 1 PCI-E 2.0 x4 (in x8); 5. Intel 82574L dual-port GbE LAN; 6. 2 SATA3 (6Gb/s) and 4 SATA2 (3Gb/s) ports, plus 4 SATA2 (3Gb/s) ports via SCU; 7. Integrated IPMI 2.0 and KVM with dedicated LAN; 8. 9 USB 2.0 ports (2 rear + 6 via header + 1 Type A); 9. DOM power connector support. The motherboard has been pulled from a working Unraid environment. The original box is included. No I/O shield is included in this sale; it can be obtained elsewhere for pennies. An international tracking number will be provided. I am looking for 110 euro, shipped worldwide via registered post. Only PayPal (friends and family) or SEPA transfer accepted. Thanks for looking.
  11. Smooth update to 6.9.0 from 6.8.3. All Dockers/VMs started as expected. All my disks reported the "back to normal utilization levels" message. Thank you.
  12. Hi all, I have exchanged my trusty old APC unit for a newer model: https://www.apc.com/shop/pk/en/products/APC-Smart-UPS-C-1000VA-LCD-RM-2U-230V/P-SMC1000I-2U The new unit is serving my needs and power requirements well, although I noticed that some important statistics are not shown by the APC daemon: Nominal Power, UPS Load and UPS Load % are missing. I have searched the forum but couldn't find an answer. The APC daemon is obviously not getting any updates anymore, and NUT behaves the same way. Is there any way to tweak something in the APC daemon (a quick way to check what the UPS actually reports is sketched below)? My UPS does not support firmware updates, so it cannot be solved from the UPS side. I will appreciate any tips. Unraid Pro 6.8.3.
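     As a starting point, here is a quick sketch of how one could check which status fields the UPS actually reports to the daemon. It assumes the stock apcaccess utility that ships with apcupsd is on the PATH; if NOMPOWER or LOADPCT never appear in its output, I suppose the web page simply has nothing to compute the load figures from.

     ```python
     #!/usr/bin/env python3
     # Quick check of which status fields the UPS reports to apcupsd.
     # ASSUMPTION: the stock apcaccess utility from apcupsd is on the PATH.
     import subprocess

     WANTED = ("MODEL", "NOMPOWER", "LOADPCT")

     def report_ups_fields() -> None:
         out = subprocess.run(["apcaccess", "status"],
                              capture_output=True, text=True, check=True).stdout
         fields = {}
         for line in out.splitlines():
             if ":" in line:
                 key, value = line.split(":", 1)
                 fields[key.strip()] = value.strip()
         for key in WANTED:
             print(f"{key}: {fields.get(key, '<not reported by UPS>')}")

     if __name__ == "__main__":
         report_ups_fields()
     ```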
  13. Dear johnnie.black, Thank you very much for your reply. You are right: the disk was the problem. Once the 3TB point was passed, the rebuild ran at full speed, although with some really weird results. I have already rebuilt Disk3 successfully and decided to preclear the faulty disk. To my surprise the preclear went smoothly, reaching almost 120MB/s, yet I haven't noticed any change in the parameters mentioned earlier after the preclear. I decided to give it a try and put it back into operation in the array; the results were similar, so this disk is obviously dead now. Thank you also for mentioning my full disks, which explains the warnings in the log. I'll make sure to free up some space according to your recommendation. Best regards
  14. First, thanks in advance for your input. Due to its age (more than 7 years old), one of my disks reported some reallocated sectors and was replaced with a brand-new WD40PURZ. Initially the rebuild started at an unusually slow speed of only 30 MB/s, and it then slowed down further to 3-4 MB/s. One of the other drives in the array (disk3, 3TB) is showing concerning SMART results as well (raw read errors) and is scheduled to be exchanged next. While rebuilding, I can see disk3 has doubled its raw read errors. Both disk1 and disk3 are connected to the 16-port LSI controller and share the same SFF cable. It deserves to be mentioned that during the disk exchange I haven't moved any cables, as drive cages are involved. All cables in the system are Supermicro branded and seated properly. The PSU is an 850W Corsair. The parity drive is connected to the motherboard SATA controller. Sixteen of the array disks are connected to the 16-port LSI controller and 2 disks to a Dell PERC H310. My usual parity check speeds were close to 130-140 MB/s. Can I blame disk3 as the bottleneck (due to the raw read errors), or is this more likely a controller/cable/PSU issue? Unraid 6.8.2. Diagnostics attached. unraid-diagnostics-20200307-1753.zip
  15. Balena Etcher (or a similar application) would be very useful for people who have card readers installed in their rigs. https://www.balena.io/etcher/