Alexandro

Members
  • Posts: 321
  • Joined
  • Last visited

Everything posted by Alexandro

  1. Thank you. "Previous Apps section" is new to me. Will check now. Thank you very much for this. Greatly appreciated.
  2. I had my docker.img file on a WD cache drive. I decided to move it to my SSD, which is an unassigned device. I pointed the path of the image to the new location and all my dockers were absolutely fine. Two days later, while examining the logs, I noticed strange messages. I searched the forum and found out others have experienced the same problem. I understand that the docker.img file went bad and needs to be recreated. My appdata folder is on the same unassigned device. Could you please confirm the steps needed: 1. Stop the array; 2. Delete the old docker.img file; 3. Create a new docker.img file; 4. Start the array; 5. Go to Community Applications; 6. Download the dockers I need; 7. The old paths and data of the dockers should be the same. Thank you in advance. p.s. Attached the diagnostics file unraid-diagnostics-20210317-2011.zip
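The delete-and-recreate steps in the post above can be sketched from the command line; this is a non-authoritative sketch, and the image path is an assumption (check Settings > Docker for the location actually configured on your system):

```shell
# Sketch only - do not run blindly. The docker.img path below is an
# assumed example, not necessarily where your image lives.
#
#   /etc/rc.d/rc.docker stop              # stop the Docker service
#   rm /mnt/disks/ssd/docker.img          # delete the corrupt image (destructive)
#
# Re-enabling Docker in Settings > Docker recreates the image at the
# configured path and size; containers can then be reinstalled from
# Apps > Previous Apps with their old templates and appdata paths intact.
```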
  3. Here for sale is a Supermicro X9SRL-F motherboard (rev. 1.01). Key features: 1. Single socket R (LGA 2011) supports Intel® Xeon® processor E5-2600/1600 and E5-2600/1600 v2 family; 2. Intel® C602 chipset; QPI up to 8.0 GT/s; 3. Up to 512GB ECC DDR3, up to 1866MHz; 8x DIMM slots; 4. Expansion slots: 2 PCI-E 3.0 x8, 2 PCI-E 3.0 x8 (in x16), 2 PCI-E 3.0 x4 (in x8), 1 PCI-E 2.0 x4 (in x8); 5. Intel® 82574L dual port GbE LAN; 6. 2 SATA3 (6Gb/s), 4 SATA2 (3Gb/s), and 4 SATA2 (3Gb/s) ports via SCU; 7. Integrated IPMI 2.0 and KVM with dedicated LAN; 8. 9 USB 2.0 ports (2 rear + 6 via header + 1 Type A); 9. DOM power connector support. The motherboard has been pulled from a working UNRAID environment. The original box is included. No I/O shield is included in this sale; it can be obtained elsewhere for pennies. An international tracking number will be provided. I am asking 110 euro shipped worldwide via registered post. PayPal (friends and family) or SEPA transfer accepted only. Thanks for looking.
  4. Smooth update to 6.9.0 from 6.8.3. All dockers/VMs started as expected. All my disks reported the "back to normal utilization levels" message. Thank you.
  5. Hi all, I have exchanged my trusty old APC unit for a newer model. https://www.apc.com/shop/pk/en/products/APC-Smart-UPS-C-1000VA-LCD-RM-2U-230V/P-SMC1000I-2U The new unit is serving my needs and power requirements well, although I noticed that some important statistics are not shown by the APC daemon: Nominal Power, UPS Load and UPS Load % are missing. I have searched the forum but couldn't find an answer. The APC daemon is obviously not getting any updates anymore, and NUT behaves the same way. Is there any way to tweak something in the APC daemon? My UPS does not support firmware updates, so it cannot be solved from the UPS side. I will appreciate any tips. Unraid Pro 6.8.3.
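When the daemon does expose the relevant fields, the missing load-in-watts figure can be derived from `apcaccess` output. NOMPOWER and LOADPCT are standard apcupsd status field names, but whether a given UPS reports them depends on the model, which is exactly the problem described above. A minimal sketch (not part of apcupsd itself):

```shell
# Sketch: derive the UPS load in watts from apcaccess-style
# "KEY : VALUE" lines on stdin. Prints "n/a" when the UPS does not
# report NOMPOWER or LOADPCT, as described in the post above.
load_watts() {
    awk -F' *: *' '
        /^NOMPOWER/ { nom = $2 + 0 }   # nominal power in watts
        /^LOADPCT/  { pct = $2 + 0 }   # load as a percentage
        END { if (nom && pct) printf "%.0f\n", nom * pct / 100
              else            print "n/a" }'
}

# Typical use (requires apcupsd):
#   apcaccess status | load_watts
```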
  6. Dear johnnie.black, Thank you very much for your reply. You are right, the disc was the problem. Once the 3 TB point was passed, the rebuild went at full speed, although with some really weird results. I have already rebuilt Disk3 successfully and decided to preclear the faulty disc. To my surprise, the preclear went smoothly, reaching almost 120 MB/s, yet I haven't noticed any change in the parameters mentioned earlier after the preclear. I decided to give it another try and put it back into operation in the array. The results were similar, so this disk is obviously dead now. Thank you also for mentioning my full discs and explaining the warnings in the log. I'll make sure to free up some space according to your recommendation. Best regards
  7. First, thanks in advance for your input. Due to its age (more than 7 years), one of my disks reported some reallocated sectors and has been replaced with a brand new WD40PURZ. Initially the rebuild started at an unusually slow speed of only 30 MB/s, and then slowed further to 3-4 MB/s. One of the other drives in the array (disk3, 3 TB) is showing concerning SMART results as well (raw read errors) and is scheduled to be exchanged next. While rebuilding, I can see disk3 has doubled its raw read errors. Both Disk1 and Disk3 are connected to a 16-port LSI controller and share the same SFF cable. It deserves mentioning that during the disk exchange I didn't move any cables, as drive cages are involved. All cables in the system are Supermicro branded and seated properly. The PSU is an 850 W Corsair. The parity drive is connected to the motherboard SATA controller, 16 of the array disks are connected to the 16-port LSI controller, and 2 disks to a Dell Perc H310. My usual parity check speeds were close to 130-140 MB/s. Can I blame disk3 as the bottleneck (due to raw read errors), or is this more likely a controller/cable/PSU related issue? Unraid 6.8.2. Diagnostics attached. unraid-diagnostics-20200307-1753.zip
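To check whether disk3's raw error count keeps climbing during the rebuild, the attribute can be pulled out of smartctl's attribute table and compared between readings. A sketch, assuming smartmontools is installed; `/dev/sdX` is a placeholder for disk3's device node:

```shell
# Sketch: extract the raw value of the Raw_Read_Error_Rate attribute
# from `smartctl -A` table output so successive readings can be
# compared over time.
raw_read_errors() {
    awk '$2 == "Raw_Read_Error_Rate" { print $NF }'
}

# Typical use (requires smartmontools); run before and after the
# rebuild to see whether the count is still growing:
#   smartctl -A /dev/sdX | raw_read_errors
```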
  8. Balena Etcher (or similar application) would be very useful for people who have card-readers installed in their rigs. https://www.balena.io/etcher/
  9. Thanks for your input. Can I rule out a disk/cable issue, given that the rebuild (with the same disk) finished successfully?
  10. Hi there, in need of some help here. During the scheduled monthly parity check, one of the disks threw a bunch of errors and was consequently disabled by the system. The disk passed the short SMART self-test and was then successfully rebuilt from parity, and the system is in a working state now. Due to a change in the room temperature (it is now 22 degrees Celsius), my first guess was that I am facing a heat related issue. It deserves mentioning that the SAS controller is installed next to a DVB-S card, which runs hot by default. I would appreciate any comments on the situation and on what triggered this event. Diagnostics file attached. Hardware configuration in my signature. Thank you very much in advance. unraid-diagnostics-20200116-0748.zip
  11. Hi guys, here for sale is a Dell Perc H310 (flashed to IT mode). Price is 50 euros. No longer needed, as I upgraded to a 16-port controller. I can ship anywhere in the world; please send me your destination and I will check the shipping cost with the post office. Best regards.
  12. Wow...Just wow. Thank you very much for your efforts. It looks lovely.
  13. Hello Mex. Yes, 6 drive cages version (3 from every side) would be great. Please kindly find attached the closeup of the logo you requested (and sorry for the dust). Best regards.
  14. Dear Mex, It would be great if you could make an icon for the Lian Li PC-343B case with 6 Supermicro 5x3 cages. Currently I have only 4, but soon the free 5.25" slots will be populated to the fullest. Thanks in advance. BR
  15. Dear Squid. I would just like to confirm that following your advice resulted in a fully working system. Once again, thanks for the support.
  16. Dear Squid, I appreciate your help very much. Thank you. No error messages with this additional path mapping. I will wait till tomorrow when new downloads are scheduled and will report here for the sake of good order.
  17. Thank you Squid. I understand. I will switch to the Transmission docker. Could you please let me know what you mean by mapping /mnt to /mnt in the Sonarr container? I do not see such a setting. Do I need to add a custom path? Sorry for the newbie question.
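The "/mnt to /mnt" suggestion refers to a container volume mapping: in the Unraid GUI it is indeed a custom path, added via "Add another Path" on the container's edit page, with both the container path and the host path set to /mnt. Expressed as a plain `docker run` flag, it looks like the sketch below (the image name and other flags are illustrative placeholders, not the exact Unraid template):

```shell
# Config sketch only - in the Unraid GUI the equivalent is
# "Add another Path" on the Sonarr container's edit page, with
# container path and host path both set to /mnt. With the host's /mnt
# visible at the same location inside the container, download paths
# reported by Transmission resolve identically for Sonarr.
docker run -d --name=sonarr \
  -v /mnt:/mnt \
  linuxserver/sonarr
```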
  18. Transmission is a plugin, not a docker. Here is a screenshot. Do I need to have Transmission as a docker?
  19. Thanks Squid. Transmission is downloading straight on the cache drive. The full path to it is:
  20. Dear all. For the last few years I have been using Sonarr with Transmission with great success. As my unraid server version was quite old, I decided to update. I am using the Transmission plugin (not the docker) and the latest Sonarr. The problem is that once Sonarr tells Transmission to download a series and the download finishes, Sonarr cannot move it to the cache drive where it should be located. The settings are exactly the same as they were prior to the Unraid update. The Data folder in the Sonarr docker points to the folder where Transmission is downloading and seeding. Here is an excerpt from the log file: Please kindly see my screenshots attached. Thanks for your time in advance.