Hastor

Members
  • Posts: 43
  • Joined
  • Last visited
Everything posted by Hastor

  1. Ah, you're correct. I don't think I ever touched that setting; does it default to never spinning down? I know that when I haven't used it for a while, I typically hear more noise and have to wait a second before it becomes responsive, and the same happens when accessing a file from a different disk in the array, so it always seemed like a spin-down delay was there, but I'd never seen the * as the temp.
  2. All of my disks are SATA, and I even hear (and wait for) them to spin up, but I have never seen a * for the temperature. They always maintain a temp a few degrees above room temp for the ones in an external enclosure (28C in a 24C room), and the ones in the server tower run a couple degrees warmer, even if I haven't accessed them for days. Even if I click spin down, I never see a * - is something wrong? Sharing my config. All my drives are Seagate, either IronWolf or Exos. megachurch-diagnostics-20230709-1645.zip
  3. I touched on this in another thread where I was asking a few things, but based on the answer, and what I see when I click the ? for info on my array in Unraid, the temperature should read * when my drives are spun down. I've never seen a * under the temperature for any of my drives, only actual temperatures, for years. I'm using Seagate HDDs for my array and Crucial SSDs for my cache, and have also used Samsung and WD SSDs. I've always seen a temp on everything. Specifically, Unraid says (copy/pasted): "We do not read the temperature of spun-down hard drives since this typically causes them to spin up; instead we display the * symbol." The thing is, I could swear I hear my drives spinning down. When I haven't accessed a drive for a while and request a file from it, there is typically a brief delay before the first file comes in, and then it is instant from then on. That seems like spin up/down behavior to me (and sounds like it when I'm near enough to hear it), but I've never seen a * in the temp column to indicate a spun-down drive. Someone replied in my other thread saying they saw * when spun down, so what gives? I appreciate any help! Thanks! Diagnostics attached megachurch-diagnostics-20230518-0110.zip
  4. Just curious about a couple of things: - If I hover over the HDD temps and click when the cursor is a ?, the info says that temperatures won't be read for spun-down drives and a * will be shown instead. I just noticed that. For a couple of years now I've heard my drives spin up and down, but I've always seen a temp and never a *. My drives ARE spinning down, right? - If I click 'spin down all drives' or 'spin up all drives', how long does that last? If a drive is in use, or a file from it is requested, I'd assume it would spin up. Does a manually spun-up drive just spin down again after the normal idle time? Thanks, just wanna be sure I understand what some buttons do, and that my drives are spinning up/down properly.
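For anyone wanting to verify spin-down state directly, `hdparm -C` reports a drive's power state without forcing a spin-up (and `smartctl -n standby` skips a sleeping drive entirely rather than waking it). A minimal sketch, assuming a bash shell on the server; the helper name and device path are illustrative, not Unraid's own tooling:

```shell
# classify the "drive state is:" line that `hdparm -C /dev/sdX` prints
drive_state() {
  case "$1" in
    *standby*) echo "spun down (Unraid would show *)" ;;
    *active*)  echo "spun up" ;;
    *)         echo "unknown" ;;
  esac
}

# on a live server you would feed it real output, e.g.:
#   drive_state "$(hdparm -C /dev/sdb | tail -n1)"
drive_state "drive state is:  standby"
```

If that reports standby while the GUI still shows a temperature, the drive really is spinning down and the display is the odd part.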
  5. This is when they are green. If I hover over the green dot, it says "Normal Operation, Device is active. Click to spin down device." However, Capabilities/Attributes show 'not available' most of the time. I even tried immediately after copying a file to the cache, refreshing a few times to see if it appeared, so I know it is active, but neither drive will give me stats. I also tried after running mover with a small file on the cache. Sometimes they do show, though... I wish I knew what determined when. Updating to note that if I do a transfer that takes long enough and check while the file is being transferred to the cache, I can see the info at that point. They are always green, though. My previous Samsung SSDs for cache had this info available every time I looked; these WDs seem to work just fine but only provide info while transferring.
  6. Well, this is odd @JorgeB, now they are unavailable again. I haven't changed anything or rebooted, though I did copy some files to the cache, which are still on it until mover runs. The disk info seems to come and go, but the cache works fine. Sorry to respond so many times, but things kept changing and throwing me off. Diagnostics in the comment above. Thanks again for any insight!
  7. Never mind, this info is available now. I expected it to be available immediately. I'm out of SATA ports, so I shut down, replaced one cache drive, rebooted, put the new drive in the empty slot, and waited for the array to start and btrfs to finish its mounting, which took a bit. Then I did the same for the second drive, and rebooted again for good measure, though I haven't rebooted since the last time I checked. I guess it just takes some time to appear after installing a new drive... I didn't even consider that it could be delayed.
  8. Here are the diagnostics. I normally do include them, but I figured WD drives were probably common and it might just not work with them. Thanks for checking it out! megachurch-diagnostics-20221119-0512.zip
  9. I just upgraded my cache from 2x500GB to 2x1TB with two WD Blue SSDs, mirrored. When I view the disk info, the Attributes and Capabilities tabs show 'not available' messages. I couldn't find much about it by searching. One reason I replaced the old drives was error messages from those attributes, and it seems I'm blind to them now, but I wouldn't expect WD to be that rare or odd as far as drives go. I was previously using Samsung EVO SSDs, but one was getting an increasing reallocated sector count, then worse errors. I replaced it with another EVO, and the same thing happened to it a few months later (the other has been perfect, though), so I decided to change brands and increase the size. Now I'm worried the same thing will go on and I won't know about it. If that's the case with these drives, could it result in corrupt files potentially being moved, or does btrfs somehow notice if something doesn't match up?
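On the corruption worry: btrfs checksums data and metadata, so a scrub (`btrfs scrub start /mnt/cache`, then `btrfs scrub status /mnt/cache`; the mount path is Unraid's default) detects mismatches and, on a two-device mirror, repairs from the good copy. And even when the GUI tabs say 'not available', `smartctl -A` from the terminal may still show the attribute table. A small parsing sketch; the helper name and sample line are illustrative:

```shell
# pull the raw-value column for a named attribute out of `smartctl -A`
# output (attribute names are the standard smartmontools ones)
smart_attr() {  # usage: smart_attr "<smartctl -A output>" <attribute name>
  printf '%s\n' "$1" | awk -v name="$2" '$2 == name { print $NF }'
}

# live usage: smart_attr "$(smartctl -A /dev/sdc)" Reallocated_Sector_Ct
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12'
smart_attr "$sample" Reallocated_Sector_Ct
```

Watching that raw value over time from a cron job would cover the "blind to reallocated sectors" concern regardless of what the GUI shows.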
  10. Just one last post to confirm it wasn't monitor related. I bought a new PSU for the monitor I normally use, because its own is packed up for the move. Same results, though. I expected that, but my older monitor displayed it a little better. In 6.9.2 the VGA cable I use makes a difference as to whether it displays cleanly or zoomed in. I'm using a cable that displayed better at that point; I'm guessing some cables don't have a pin to communicate resolution, etc. Weird that other T140 users aren't complaining so far. I guess I primarily use it headless and can still get to the terminal, so if the network dies, I can still get to stuff. I'll just have to search/ask in the forums, as I'm used to the GUI! Hopefully that won't be an issue! If I find a solution, I'll let you know. This server doesn't support added GPUs as the primary screen, or I'd just add one. If there's any chance a different driver than the one listed above is being loaded for my card, let me know, but I'm guessing you were able to see that in the diagnostics. Something that worked in 6.9.x and at least the first release of 6.10.x (the only one I tried) changed for my graphics, in any case!
  11. I'm definitely not the only T140 user here, but I haven't heard from others. I'm going to try a different monitor tomorrow, one that Unraid 6.9.2 displayed much better on for some reason (but only with certain VGA cables). I'm moving and its power supply is packed, so I ordered a same-day-delivery PSU to test it today. I borrowed the current monitor from my partner for the upgrade, but it DOES get video on 6.9.2. Just to be sure, should I be putting that line in the box you showed, or the one under it for the GUI? Is that definitely blocking the right driver for the chipset shown in the screenshot I posted above? I really appreciate your help; I just really need to use Unraid for hopefully a few more years on this machine! I need to upgrade my cache, and 6.9.2 has a bug with doing that! Currently on this machine: 6.9.2 = cache upgrade bug, 6.10.x = no network, 6.11.3 = no video. I feel like I'm not having very good luck lol.
  12. That didn't change anything. I also tried adding it in the same place in the GUI box below that one, since that's the mode with the issue and I thought maybe it wasn't running that line while booting to GUI, but that didn't change anything either. I reverted to my backup of 6.9.2, the GUI came back, and then I upgraded again before attempting the above, just to be sure I had a clean upgrade. Nothing I tried gave me a GUI over VGA again on the current version. I have always noticed that on 6.9.2 Unraid renders very large by default on my 1080p monitor, and I have to scroll left/right to see everything or zoom out the browser. Were there any changes in resolution since 6.9.2? This is the info I can find on the server I'm using: I am using a very vanilla install. Anything else I can try/provide? I appreciate the assistance very much!
  13. I just got around to updating my Dell T140 to 6.11.3 and am having the same issue. I was on 6.9.2 before, as I had network interface issues on 6.10.x last time, so I rolled back and waited a bit. Network is great, and I can get to the GUI there. I see all the boot output, but it goes blank when it would show the GUI over local VGA. I wasn't sure if I should make a new thread or not, since this one is marked solved but describes my issue exactly. Updating to note that it uses Matrox video according to Dell's specs. Diagnostics attached. Please help! Thanks! megachurch-diagnostics-20221116-2231.zip
  14. Sorry to necro a thread, but I'm still on 6.9.2. I'm only using it as an SMB file server, so it still suits me fine, but I do like to stay up to date. Does anyone know if extra steps are still necessary on the current version (6.11.1)? Was this ever fixed? I did attempt to update to 6.10.2 when it was new and got the message that it couldn't find ETH0 (I believe that was it), as described here, and just restored my backup, figuring it would eventually be addressed.
  15. I have the minimum free space on my array set to 150GB. This seems to override any setting for the cache, or I would set it a bit bigger for the array disks to ensure free space and smaller for the cache. In any case, my cache's minimum free space is set even lower than that, so it isn't a factor here. When copying files to the cache, it will sometimes still have 170GB free when it switches to writing to the drives in the array. Why doesn't it always use the last 20GB? I'm not using anything other than the share: no VMs, Dockers, etc., and I'm on the latest version of Unraid. Not the biggest deal, but I like to understand what's going on.
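A likely explanation, as I read Unraid's minimum-free-space docs (a sketch of my understanding, not authoritative): Unraid can't know an incoming file's size up front, so the only check it can make is whether free space has already fallen below the floor at the moment each file is created; a file already streaming keeps writing. In GB:

```shell
# minimal sketch of the per-file overflow decision (values in GB;
# the real check happens when each file is created, not mid-write)
min_free=150
for free in 170 140; do
  if [ "$free" -lt "$min_free" ]; then
    echo "${free}GB free: overflow to the array"
  else
    echo "${free}GB free: write to the cache"
  fi
done
```

So a switch while the display reads 170GB free suggests free space briefly dipped below the floor mid-transfer (or the reading lagged), rather than the last 20GB being reserved outright.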
  16. Remembering a little better now: I'd removed the share completely, as my user share was the only one on each of my disks. I'm not sure why the isos share reappeared, then. I just noticed there was more than one folder on a disk, which isn't usual; my user share should have been the only thing on the actual array. I've deleted the share again, but if it matches the post from 2017, it might appear again... I shall see.
  17. I'm not using VMs; the VM service hasn't run since I first set up my server. I have the isos share set to use the cache. However, an empty isos folder sometimes appears on one of my storage drives. I expect it to sit there (empty) on my cache pool, but why is it appearing on a storage drive? Under Cache, it is set to YES:Cache (not Prefer). I found a thread from 2017 describing the exact same thing, but it ended with no real solution or explanation (just another member saying it might be a bug, which I doubt given it has been 4 years). Is there anything I'm missing here, or an explanation anyone can provide? It isn't really hurting anything, but I like to know why my stuff is doing what it does. Attached diagnostics megachurch-diagnostics-20210818-2334.zip
  18. I am planning on upgrading a couple of disks in my array soon, as it is getting a bit full; I'm just waiting for drive prices to normalize a bit, so I know I should increase my space. Anyway, I saw my Disk 1 was approaching 90% utilization. I didn't want to get an email from the warning, and I'm keeping a close eye on my disks manually, so I set the Warning threshold for all my disks to 0, which says it should disable it, including warnings. I set the Critical threshold to 99% for now. I still got an email from my server, startling me of course! It was a warning email saying my disk was at 91% usage. Why did this happen? Is there another setting that would trigger this which I'm missing? I changed this setting a while back; I'm not sure exactly when, but over a week ago at the very least. I've stopped the array, rebooted, etc. since then, and the settings for my disks are still 0 for warning and 99 for critical. My emails are set to be sent for warnings and alerts, and this email says it is a warning. I'd like to stop it before it happens on the next disk! Attaching my diagnostics in case it helps. megachurch-diagnostics-20210624-0135.zip
  19. I did not bundle them; these connectors come with the cables in a single bigger cable that breaks out for the last few inches. That's how they are always made and used. The shortest I can find anyone even making is 1.6 ft, which is what I'm using. This is for an HBA330 NON-RAID with the SFF-8643 connectors. It seems to be industry standard to make them this way; should I cut off the 'sleeve' holding the cables together? Of course, they'll always be bundled together on the end that plugs into the controller.
  20. It's a breakout cable from a SAS controller to 4 drives. Only this drive has reported issues. It almost sounds like I should do a parity check after the reseat and see how it goes. This cable had only been in use a month or so, with no issues, and hadn't been moved recently. I should pick up a spare, though.
  21. I've been running this array, at least with all the disks in place, for a little over a week. My first parity drive triggered emails about UDMA CRC errors today when mover ran. It reported 12517 errors. I powered off, reseated some cables, rebooted, and it reported 449 errors. I did another cable reseat/reboot and am now running a SMART test on it. No errors have been reported since the last couple of reboots. This was a somewhat small move, as I've moved a lot over the past couple of days, filling the cache drive; I don't know exactly, but I don't think I had more than 20GB or so this time when it happened.
I'm just looking for thoughts on what I should check to declare this 'probably a cable issue' versus 'replace the drive'. This drive is about a year old, not as old as the 10TB drives in my array, but being parity, it was getting worked harder. It is a Seagate IronWolf. If the SMART test passes, should I go with it, or look into replacing the drive? I just noticed the spike in prices for 16TB drives since I bought a couple more a little over a month ago, and I can't afford to replace it right now. In that case, I guess my options are to shut things down and wait for drive prices to normalize, or go single-parity, which would leave me unprotected while it rebuilt. I'd be open to any other suggestions!
Attaching diagnostics. I did disconnect some other drives during a few reboots, as I forgot which physical drive this was. The array doesn't auto-start, so it never started missing any disks, other than the first parity disk that was taken off due to the errors. This diagnostic was taken right after I started the extended SMART test, which is still running; if we need to wait on that, it's fine. These are just 'first failure' concerns, and I don't want to spend money I don't need to, or give up on a drive that is OK. This drive, along with some others in the array, was just moved from a Drobo that ran for a couple of years with no problems, and a couple of the drives are brand new as well.
I appreciate any insight or thoughts! I know this might be a little premature with the SMART test still running, but if anything can be told from the logs, I'd like to know. Of course, with the drive being rather new, it's worried me! I guess it is still under warranty with Seagate, but I've never used that process and don't know their criteria for considering a drive 'not working'. megachurch-diagnostics-20210522-1219.zip
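One detail worth knowing when judging cable vs. drive: the UDMA CRC error count (SMART attribute 199) is typically cumulative and never resets, so the absolute number matters less than whether it keeps climbing after a reseat. A hypothetical helper to make that comparison explicit:

```shell
# compare two readings of SMART attribute 199 taken some time apart;
# a stable count after a reseat points at the old cable connection,
# a climbing one at the link (cable, port, backplane) or drive
crc_verdict() {  # usage: crc_verdict <previous count> <current count>
  if [ "$2" -gt "$1" ]; then
    echo "still climbing: recheck cable/port/backplane"
  else
    echo "stable: the old errors were probably the cable"
  fi
}

# live readings could come from: smartctl -A /dev/sdb | awk '$1 == 199 { print $NF }'
crc_verdict 12517 12517
```

CRC errors are link errors rather than media errors, so a passing extended SMART test plus a stable count is generally a good sign for the drive itself.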
  22. Yeah, it isn't a ton, but at the rate it was growing, it seemed like a lot: 400MB after only a couple of days of uptime on a 500GB cache, and if it kept up that rate... that was my main worry. It seemed like a bigger percentage in my mind all the same; I just got off work and am tired and didn't think it fully through! As long as it doesn't add another 200MB every single day, I think it should be fine.
  23. I've got a fairly recently set up array, using mostly default settings. It's my first Unraid array, but things have been going well. However, I've had an interesting issue and wonder if it's a setting I'm missing, or something I can clear. I have a cache pool of 2x500GB SSDs. Every day after mover has run, there is less free space on the cache: it now has over 400MB used, and was just over 200MB a day or two ago. If I browse the folder from the UI, it contains only empty folders (one is empty except for 2 empty subfolders). As I'm using this strictly as a NAS, I had deleted the other files Unraid might put here via the Unraid UI, before I'd even added the cache. The cache was added with purely default settings aside from setting a minimum free space.
I have moved a large number of files in the past couple of days, to the point that the cache got full and writes went directly to the storage drives a couple of times. However, as of now, according to both the Unraid UI file browser and checking via ssh, the cache has nothing on it but empty folders. The cache on the 'Main' tab says it has 416MB on it, though. Over 400MB is a fair amount on a 500GB disk, especially when it is growing. Is there a log that's increasing? I need to figure out how to stop this in any case, as my cache will be useless before long at this rate. My cache and array are both set to a minimum of 150GB free space, so it is eating an even higher percentage of what I want to use. I'd expect a little tied up due to formatting, but it was tiny to start with. Diagnostics are attached; I appreciate any help! As usual, sorry if this is covered, but I've searched and haven't found anything so far.
EDIT: Stopping and starting the array seems to have cleared it back to only 4MB used, just like it started. I don't want to stop/start my array every couple of days, though... but this may be helpful information. Diagnostics are from before stopping the array, while the seemingly phantom space was taken. megachurch-diagnostics-20210519-2100.zip
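The "phantom" usage pattern fits how btrfs accounts for space: it allocates chunks ahead of file data, and churn from mover can leave partly empty chunks that get counted as used until a remount or a balance tidies them. A sketch of checking the allocated-vs-used split from the terminal; the mount path is Unraid's default cache mount and the sample output is illustrative, not from the poster's system:

```shell
# live command: btrfs filesystem df /mnt/cache
# (btrfs filesystem usage /mnt/cache gives the fuller picture)
sample='Data, single: total=8.00GiB, used=416.00MiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.00GiB, used=3.20MiB'

# "total" here is chunk allocation, "used" is actual file data;
# a big gap between them is reclaimable space, not lost space
printf '%s\n' "$sample" | awk -F'used=' '/^Data/ { print "data used: " $2 }'
```

If allocation keeps growing, `btrfs balance start -dusage=50 /mnt/cache` compacts data chunks that are at most half full, without a stop/start of the array.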
  24. This - in researching before buying my T140 from Dell, I learned this and requested the HBA330 Non-RAID controller, which has worked great. Unraid wants direct access to the drives, without going through another layer, so it can handle all of that itself. Some RAID controllers may have a non-RAID firmware you can flash (I know mine is available as another model with RAID firmware, though it seems to be a pretty popular one, and the only one I currently have any info on).
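A quick sanity check that a controller really is passing drives through: `smartctl -i` should report the bare drive's model string, not a RAID volume. A sketch; the helper name and model strings are illustrative (PERC is Dell's RAID controller line):

```shell
# crude pass-through check on the reported model string
access_check() {
  case "$1" in
    *PERC*|*'Logical Volume'*) echo "RAID layer in the way" ;;
    *) echo "direct drive access" ;;
  esac
}

# live: access_check "$(smartctl -i /dev/sdb | awk -F': *' '/Device Model/ { print $2 }')"
access_check 'ST16000NE000-2RW103'
```

Seeing the drive's own model (and full SMART data) is a good sign the HBA isn't hiding the disks behind a virtual volume.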
  25. It seems like odd intended behavior. Well, it makes sense at a certain scale, but when you're using 16TB (hopefully bigger in the future) drives, it's reasonable to want more free space on such a big drive, to keep it healthy, than on the cache. I believe this is all based on what your biggest file will be, but I want my 16TB drives to stop at, say, 500GB free, while my biggest file will be 100GB, so I should be able to set my cache minimum to that. It should account for drive-health concerns and move to the next drive after hitting the array disk's minimum free space, while still letting you use 400GB of cache on a 500GB drive. In short, largest file size isn't the only factor when choosing free space on the storage drives, but it is the only big factor for the cache, meaning the cache should allow a smaller minimum free space. Edit: For now, I've just set the free space on my storage drives to the same as the cache, but I really don't want my 16TB drives getting that full... I'd much rather it stop using a 16TB drive while it still has 500GB (or a little less, due to the last file) free, but currently that would leave me with no cache.