eggman9713's Achievements


  1. Just a thing I noted when installing this update. When I was ready to update I clicked on the Update Now link in the banner on the main page rather than going to Tools > Update OS. That bypassed the required acknowledgement of the release notes that you had implemented recently. Probably should fix that.
  2. Update on this. A little over a week ago I got an email that the support rep agreed that the drive should have a 5 year warranty and they were going to look into it with their supervisor. I haven’t gotten another response since then and will likely try to follow up in the next couple of days.
  3. Just a little experience I thought I'd share. I just bought a brand-new Toshiba N300 Pro 4TB drive from Amazon (sold and shipped by Amazon, not a third-party seller). It showed up with zero hours on the SMART data and was in retail packaging, like a new drive should be. I haven't bought a Toshiba drive in many years, and just felt like going with a new manufacturer to add some diversity to my drives; I do this to hedge against bad batches of drives from the same manufacturer. The N300 Pro is advertised with a 5-year warranty on the Amazon listing, the box, and Toshiba's website. I always check the serial numbers of my new and newly-acquired used drives and register for warranty coverage if the manufacturer allows that. Some don't have a registration process and expect you to just contact them if you have an issue during the warranty period. That is the case with Toshiba, but buried on their website is a tool to check the serial number. It's only showing a 3-year warranty when it should be 5. Now, the regular N300 is a 3-year-warranty drive, but this is an N300 Pro according to its label and its box. And the actual model number that comes back from the serial-number lookup doesn't match what's on the drive or on the box either. Interestingly, it also indicates that this drive was sold to an OEM and the warranty is through the OEM. This drive came direct from Amazon, so that doesn't seem right either. I have an email in to Toshiba customer support to clarify the situation, and I'll let you know what comes of it. TL;DR: Always check your drive serial numbers with the manufacturer to make sure they have the warranty period you expect, and register them if the manufacturer allows.
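A quick way to confirm a drive really is factory-fresh is to read its serial number and power-on hours with smartctl (from smartmontools). This is a minimal sketch, assuming the usual ATA SMART attribute table layout; `/dev/sdX` is a placeholder for your actual device.

```shell
# On a live system you would run (as root):
#   smartctl -i /dev/sdX   # identity info, including "Serial Number:"
#   smartctl -A /dev/sdX   # attribute table, including Power_On_Hours

# The raw value is the last field of the Power_On_Hours row
# (assumed standard smartctl -A column layout):
parse_poh() { awk '/Power_On_Hours/ {print $NF}'; }

# Demo on a sample attribute line (a zero-hour drive, as in the post):
echo '  9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 0' | parse_poh
# prints 0
```

A non-zero value on a drive sold as new is worth raising with the seller, just like a warranty lookup that doesn't match the box.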
  4. Unraid 6.12.3 has been released on the stable branch. I updated from 6.12.3-rc3 to 6.12.3 stable, and I still haven't had the problem since before I installed rc3. See the announcement post for it.
  5. I just installed 6.12.3-rc3, forgot to stop the array before the reboot, and sure enough it got stuck. But I was able to open a web terminal and run umount /var/lib/docker, after which the array stopped and the server rebooted normally. After the reboot I also stopped the array, and it didn't get stuck. Rebooted again, and that was normal. Stopped the array once more, and it stopped and started normally. So far, the issue seems to be fixed. I'll try stopping and rebooting a couple more times in the next few days; I normally don't reboot my server for weeks at a time, nor stop the array that often. If 6.12.3 still isn't a stable release in the next week, I'll provide an update.
  6. I just had my array get stuck while stopping today. The Docker image remained mounted, but Docker had already stopped. Unmounting the /dev/loop2 device from the command line per ljm42's instructions allowed the array to finish stopping as normal, and it started back up clean (I didn't shut down or reboot the server).
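For anyone hitting the same hang: the loop device backing the Docker image can be identified from `losetup -a` output before unmounting it. This is a sketch under assumptions, not ljm42's exact procedure; the docker.img backing-file path below is an example, and the loop number on your system may differ from the /dev/loop2 mentioned above.

```shell
# Find the loop device backing Unraid's docker.img from `losetup -a` output,
# so the array can finish stopping once it is unmounted.
# losetup -a lines look like: /dev/loopN: ...: (/path/to/backing-file)
find_docker_loop() {
    awk -F: '/docker\.img/ {print $1}'
}

# Demo on a sample losetup line (backing path is an assumed example):
echo '/dev/loop2: [2049]:131 (/mnt/cache_nvme/system/docker/docker.img)' | find_docker_loop
# prints /dev/loop2

# On a live system (as root), something like:
#   losetup -a | grep docker.img     # confirm the device first
#   umount /dev/loop2                # then unmount it
```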
  7. The parity check after my last unclean shutdown finished with zero errors, as it always seems to whenever I have an unclean shutdown. That lends some credence, in my mind, to the idea that everything is stopping but something is holding up the disks from unmounting. I did some testing this evening with the following results.
1. Stop the array while watching the log. Everything shuts down as normal and the array stops quickly.
2. Reboot the server while the array is stopped. The system comes back up, autostarts the array (I forgot to turn that off for testing), and it comes up clean.
3. Stop the array again while watching the log. Everything stops normally.
4. Start the array. Normal.
5. Reboot the server while the array is running, watching the log. Everything shuts down properly, and the system reboots normal and clean.
6. Shut down the server while the array is running, rather than rebooting it, because that is when it last happened to me last night. The system shuts down normally, and when I power it back on, it comes up clean.
So now it seems to be behaving itself on my server. But I do have the logs from the last time it happened showing the behavior. I'll try to do some more testing in the next couple of days.
  8. I have also noticed this behavior after upgrading to 6.12. I am currently on 6.12.2. I shut down my server to remove an unused cleared drive, and it came back up as unclean. I am waiting for the parity check to finish, and I am going to extend my shutdown time-outs for disks (90 to 120 seconds), VMs (60 to 90 seconds), and Docker (60 to 90 seconds) to see if that fixes it. But according to my syslog.txt file, a snippet of which I have posted below, it appears it is my /mnt/cache_nvme device that keeps being busy and fails to unmount. Docker and libvirt appear to shut down gracefully, the array disks appear to unmount properly, as does the ZFS disk pool I created as an experiment to test 6.12's new ZFS functionality. It tries to unmount cache_nvme like this repeatedly after everything else has stopped, until the log file ends when it powers off. (Kveer is my server name)
Jul 5 22:32:59 Kveer emhttpd: shcmd (1269849): umount /mnt/cache_nvme
Jul 5 22:32:59 Kveer root: umount: /mnt/cache_nvme: target is busy.
Jul 5 22:32:59 Kveer emhttpd: shcmd (1269849): exit status: 32
Jul 5 22:32:59 Kveer emhttpd: Retry unmounting disk share(s)...
Jul 5 22:33:04 Kveer emhttpd: Unmounting disks...
Jul 5 22:33:04 Kveer emhttpd: shcmd (1269850): umount /mnt/cache_nvme
Jul 5 22:33:04 Kveer root: umount: /mnt/cache_nvme: target is busy.
Jul 5 22:33:04 Kveer emhttpd: shcmd (1269850): exit status: 32
Jul 5 22:33:04 Kveer emhttpd: Retry unmounting disk share(s)...
Jul 5 22:33:09 Kveer emhttpd: Unmounting disks...
Jul 5 22:33:09 Kveer emhttpd: shcmd (1269851): umount /mnt/cache_nvme
Jul 5 22:33:09 Kveer root: umount: /mnt/cache_nvme: target is busy.
Jul 5 22:33:09 Kveer emhttpd: shcmd (1269851): exit status: 32
Jul 5 22:33:09 Kveer emhttpd: Retry unmounting disk share(s)...
In my case, cache_nvme is an M.2 NVMe drive in my server that is btrfs-mirrored to a second one, and those two drives are set up as a cache pool.
I have my system share there as well as my syslog folder, domains, appdata, and nextcloud docker storage share. Basically, anything that I want to be fast and accessible and doesn't need to be on the array. Interestingly, I purposely don't have disk shares enabled or used at all. In the Global Share Settings, I have Enable Disk Shares set to "No". Maybe that's just an overlap in terminology? The parity check should finish in the morning and after I get home from work tomorrow night I'll try to investigate further.
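The repeating "target is busy" retries are easy to tally from an exported syslog, and standard tools can show what is holding the mount open. A minimal sketch using the log format quoted above; the sample file path is just for the demo.

```shell
# Count failed unmount attempts for a mount point in an exported syslog
# (log format taken from the snippet in the post above).
count_umount_retries() {
    # $1 = mount point, $2 = syslog file
    grep -c "umount: $1: target is busy" "$2"
}

# Demo against lines from the post:
cat > /tmp/syslog_sample.txt <<'EOF'
Jul 5 22:32:59 Kveer root: umount: /mnt/cache_nvme: target is busy.
Jul 5 22:33:04 Kveer root: umount: /mnt/cache_nvme: target is busy.
Jul 5 22:33:09 Kveer root: umount: /mnt/cache_nvme: target is busy.
EOF
count_umount_retries /mnt/cache_nvme /tmp/syslog_sample.txt
# prints 3

# Before stopping the array, these standard tools can show which processes
# still have files open on the pool (run as root on the live system):
#   fuser -vm /mnt/cache_nvme
#   lsof +f -- /mnt/cache_nvme
```

Running the `fuser`/`lsof` check right before a shutdown is usually the fastest way to see whether a container, VM, or open terminal is the thing keeping the pool busy.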
  9. I was having this same issue recently until I just noticed today that it seems to be fine. I don't think I've changed anything recently on my server that would affect this. I'm accessing my server using Firefox 88.0.1, and unfortunately I don't know what version of Firefox I had the last time I noticed the web terminal was hard to read.
  10. I first heard about Unraid about a year ago and have been experimenting for a while now. I'm a home power user who likes tinkering with things and keeping my fleet of devices running, but I'm definitely not a developer or a deep-knowledge user. I finally got together the time and budget to build my first proper server. This will enable a lot of my archiving and deduplication projects that have been on hold for months. My original budget was USD $1,500, but between some parts I wanted being unavailable (forcing me to either wait or buy up a tier) and other parts commanding higher-than-normal prices, the total build was approximately USD $1,800, not including the UPS, Unraid license, or the rolling cart I got for the tower. It was a great opportunity to slap on the Unraid case badge I won during the 100,000-forum-member giveaway! I'm now experimenting with lots of apps, Docker containers, VMs, etc. to figure out what I want this server to do in the long term.
OS at time of building: Unraid 6.9.2
CPU: Ryzen 7 3700X 8-core 16-thread
Motherboard: ASRock Rack X570D4U
RAM: 32GB NEMIX 3200MHz DDR4 ECC Unbuffered (2x16GB) (not on the motherboard QVL but worked out of the box)
Case: Silverstone CS380
Power Supply: be quiet! Straight Power 11 750W 80plus Platinum Fully Modular ATX
SATA Expansion Card(s): None
Cables: Monoprice SATA cables
Fans: Stock case fans (1x120mm rear exhaust, 2x120mm on the drive cage), stock CPU fan (AMD Wraith Prism)
UPS: APC Back-UPS 1000
Parity Drive: Seagate Ironwolf Pro 4TB (new)
Data Drives: Seagate Ironwolf 4TB (new), 2x Western Digital Red 1TB (from inventory and a retired NAS)
Cache Drives: Crucial MX500 SATA SSD 500GB (new), Corsair Force MP510 NVMe SSD 480GB (new)
Total Drive Capacity: ~10.98 TB minus parity and overhead
Primary Use: Multipurpose/experimental server
Likes: ECC memory support, hot-swappable drive bays, IPMI functionality on the motherboard including remote control and console view, easy UPS integration.
Dislikes: The BIOS has an overwhelming (to me) number of options and the onscreen explanations aren't always great. The drive sleds have a "light pipe" that carries light from the activity LED on the backplane to the front; these are not very bright and are not easily visible when the case door is closed. The case filters are not easy to remove or re-install, especially the bottom one. The case includes PCIe slot covers that have openings in them and are unfiltered. There is no USB 2.0 header on the motherboard for internal USB devices (like the CPU cooler, see below). The drive backplane has a lot of exposed capacitors that I'm afraid I will break off accidentally. Case construction is acceptable, but not fantastic.
Add-ons Used: Unassigned Devices (and Plus), CA Mover Tuning, Dynamix Active Streams, Dynamix Local Master, My Servers, Nerd Tools, Parity Check Tuning, Preclear Disks, User Notes.
Future Plans: Upgrade to 2.5Gb or 10Gb networking (my whole network is 1Gb right now). Add SSD cages to the 5-1/4" bays. Run a Plex media server; consolidate documents, photos, videos, etc. that are spread across multiple devices; host network services (DNS, NTP, email, etc.).
Boot (peak): 85W
Idle (avg): 45W
Active (avg): 130W (Folding@Home + parity check running)
Light use (avg): 60W (a few Dockers and a Plex music stream running)
Other notes: My cable management isn't fantastic, especially the backplane SATA cables. It looks like a rat's nest, but the way I did it, I installed the cables, let them naturally settle where they seemed to want to go, and tied them together somewhat. The AMD Wraith Prism cooler comes from the factory set to a rainbow pattern. It ships with an addressable-RGB cable and a USB 2.0 cable to control it, neither of which this motherboard has a header for. So if you really want to turn off the RGB, you will need to install the CPU cooler temporarily on a board that has a USB 2.0 header, or get a USB-A to USB 2.0 header adapter, and use the Windows utility to change it. The motherboard has a dedicated IPMI network port, but IPMI is also presented by default on the LAN1 port if you only want to connect one network cable.
Edit February 2022: This server has been running stable for about 9 months. I had to make a couple of tweaks to its settings to increase how long it waits for Docker and the VMs to shut down and the array to stop before it forces an unclean shutdown (and an annoying parity check). Mother Nature also took the liberty of testing my UPS settings in a real-world outage a while ago: the server shut down cleanly and came back up when power was restored. I have also made a couple of small upgrades since the original build. I added a second NVMe SSD so I could have a mirrored cache pool; with that in place I felt comfortable enabling caching for most of my shares, and setting some shares to prefer that cache pool for better read speeds as well. I also added another 1TB Seagate Ironwolf drive that came out of my other NAS that I just upgraded. I've been having some minor issues with transfer bottlenecks that I am still trying to resolve, but overall it is still running well!
  11. I was having the same issue with my new server running 6.9.2. I changed the Docker container stop time-out from 10 seconds to 20 seconds (and left the disk shutdown time-out at 90 seconds and the VM time-out at 60 seconds), and that seems to have fixed the problem. The first time this happened I let the parity check run to completion (7 hours on my array) and it found zero errors. So at this point it seems to be an annoyance more than anything dangerous to the array data.
  12. I'm still very much a novice and am still learning how to use Unraid, but so far I like how versatile it is for applications scaling from a tiny home server to a multi-terabyte corporate NAS. I would like to see the ability to install Unraid on an internal drive, which in my experience is sometimes more reliable than simply running it off a USB thumb drive.