
Everything posted by Robot

  1. I have it enabled, yes. Although I didn't enable it myself, it seems to be enabled by default when loading BIOS defaults. What is the "syslinux configuration", and what is the "append statement"? The only thing I've manually edited so far is the go file on the flash drive share, in order to disable the C-states of my Ryzen. Thanks!
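     For anyone else landing here: from what I gather, the "syslinux configuration" is the file /boot/syslinux/syslinux.cfg on the flash drive, and the "append statement" is the line in it that passes extra parameters to the Linux kernel at boot. A stock entry looks something like this (the rcu_nocbs part is only an illustration of the C-state workaround some Ryzen owners use, and 0-15 assumes a 16-thread CPU, so treat it as a hypothetical value, not a recommendation):

         label unRAID OS
           menu default
           kernel /bzimage
           append initrd=/bzroot rcu_nocbs=0-15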
  2. It's a Sabrent 1TB M.2 NVMe. This one, I think.
  3. Hi! I built my new unRAID server about a week ago, and after the parity sync and everything I finally started using it on September the 7th. Since I have a RAID1 cache pool of two M.2 1TB drives, I installed the Trim Plugin as suggested by "Fix Common Problems". I have it scheduled to run every night at 5:30, but I get these errors on one of the M.2 drives every time the trim plugin starts:

     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2144
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2162240
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 46704704
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 48277568
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 50374720
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 52471872
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 54569024
     Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 56666176
     Sep 8 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 25 block group(s), last error -5
     Sep 8 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5
     Sep 8 23:18:51 unRAID emhttpd: shcmd (3681): /usr/sbin/hdparm -y /dev/nvme0n1
     Sep 8 23:18:51 unRAID root: /dev/nvme0n1:
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2112
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2141400
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60157808
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60860480
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 62957632
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 65475464
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 67247928
     Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 70005864
     Sep 9 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 281 block group(s), last error -5
     Sep 9 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2144
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2180088
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 28342784
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 57390288
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60157808
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60860480
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 62957632
     Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 65475464
     Sep 10 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 233 block group(s), last error -5
     Sep 10 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2144
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 28342208
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 57390288
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60157808
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60969520
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 62957632
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 65475464
     Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 67247416
     Sep 11 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 169 block group(s), last error -5
     Sep 11 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5

     It seems like some sectors are bad? It's not always the same ones... Also, it doesn't seem to be increasing day by day? Should I be worried? This M.2 drive is brand new. By the way, this particular M.2 is NVMe; the other one in the pool is M.2 SATA, since my mobo didn't support two NVMe disks. Thanks!
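     In case it helps anyone reading later: as far as I understand, the same trim the plugin schedules can be triggered by hand to reproduce the error on demand, assuming the pool is mounted at /mnt/cache:

         # run trim manually and print how much was trimmed
         fstrim -v /mnt/cache

     The drive's own error counters should also be visible with smartctl -a /dev/nvme0, which works on NVMe devices too.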
  4. Thanks for the clarifications! @jonathanm
  5. To be honest, I don't know how to check whether my drives are one way or the other. The cables do work; they were easy to plug in and I did hear the "click". I could also unplug them pretty easily, so I'm assuming they work as intended. In that last case, I assume unRAID would mark the disk as failed and ask me to replace it, right? You mean that if I write an "original" which is already corrupted, then unRAID will just see it as correct, meaning it's exactly what I wrote initially, right? Mmm, OK, this I need to be clear on. So my idea of how to do it is a no. If I understand you correctly, the steps would be:
     1. Write down the serial number of the failing disk.
     2. Disable auto-start of the array.
     3. Cleanly shut down the system.
     4. Unplug one drive (the one I think has failed).
     5. Turn on the system and see if the failing disk is indeed the one I unplugged. If so, replace the disk.
     6. If it is not, turn off again, plug that one back in and unplug another one.
     7. Same as 5.
     8. Repeat until the failing disk is the one I unplugged, then replace it.
     Is that correct? Actually, I could use that same method just to label all the disks with their serials prior to any failure, right? Since I didn't do it properly when building the system... Thank you very much!
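     Follow-up for anyone else in the same spot: it turns out the serial-to-device mapping can apparently also be read from a shell, without pulling any drives, because Linux exposes disks by ID:

         # each entry ends in the serial and links to its sdX device
         ls -l /dev/disk/by-id/ | grep -v part

     The output has lines like ata-WDC_WD40EFRX-68N32N0_WD-XXXXXXXX -> ../../sdb (serial redacted here), so each serial can be matched to its sdX name, and from there to unRAID's disk assignments, before anything fails.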
  6. According to the manufacturer, all 4 SATA lanes should have a limit of ~500MB/s each. It's not on the side, it's only on the "front" of the disk, so I can't really see them except for two (the ones which don't have another disk covering them). I'd need to unscrew all the disks in order to see the other five disks' serial numbers. That's why I say I'll do it during my next maintenance. Thanks!
  7. Squid, you say that read errors are corrected without me even knowing, but Benson, you say the system might not even know there's an error? These replies seem a little contradictory, don't they? Or maybe you guys are talking about different things and I just assumed it's the same? Yeah, I read about that. I bought "clicky" SATA cables; I hope they stay in place. Oh man... I didn't consider labeling them using serial numbers... I did label them with "Disk 1", "Disk 2", etc., but since I assigned them using the sdX label... I guess they might all be mixed up. Plus, it's not a hot-swappable case, so in order to see their serial numbers I must completely remove them and install them back in. I guess this will be a pending job for the next system maintenance. I plan on installing Noctua's noise reduction adaptors for all fans, so I'll probably do it then. I'll wait one month or so and do a parity check after it (since I'll be moving the system and all). OK, then I guess it could work: unplug/plug one by one until the faulty disk is missing. They are indeed 4TB Western Digital Red, not Pro, 5400rpm afaik. The arrangement is simple: both cache drives are M.2, so they are where they belong (the second M.2 slot disables SATA3 on the motherboard). Of the REDs, 3 are plugged into the motherboard and 4 into a PCIe x1 controller with 4 SATA connections. I wanted the parity drives to be SATA1 and 2 on the motherboard, but since I used sdX... I guess they could be any of them. Should they be doing more than ~100MB/s with dual parity? Looking at this benchmark from the wiki, the second-to-last entry in the parity check table has 6x REDs and he's getting 105MB/s. If they should indeed be faster, I really don't know how to check or what tuning I can make... help? Thank you very much, guys!!
     EDIT: As for the last thing, drive speeds: I created a test share with cache use disabled, and write/read speeds saturate my gigabit network, so the drives are indeed faster than 101MB/s. Is it normal then for the parity sync to be slower?
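     For the speed question, one thing I found (treat this as an assumption on my part): hdparm can give a quick per-device sequential read figure, e.g.

         # -T: cached reads, -t: buffered reads straight from the disk
         hdparm -tT /dev/sdb

     Running it on a motherboard-attached drive, then on one behind the PCIe x1 card, and then on several of the card's drives at the same time should show whether the card's single lane is what drags the parity sync down, since a parity sync reads all the drives at once.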
  8. Hi all! I wanted to build a NAS/server for a long time, and a couple of months ago I came across unRAID, which seemed like the best option. For the past month I've been experimenting with a very simple (and not ideal by any means) setup, with just one 512GB SSD as cache and one 2TB WD Blue drive as disk1. I liked it, so I bought seven 4TB WD Red and a couple of NVMe 1TB drives to work as a cache pool (RAID1), built it yesterday and left it overnight to do the parity sync. Today everything is OK and I'm configuring my VMs, etc. BUT!! I have some basic questions I didn't manage to solve reading the FAQs; maybe the info isn't there or maybe I'm just blind. Sorry if it's the latter.
     1. I understand how parity works, but I don't understand how errors are treated. Let's assume one file has an error on one drive for whatever reason. Will the parity drive rebuild as soon as the error happens? Or will it wait for the parity check to even realize there's an error and then fix it?
     2. I'm running dual parity, which from what I've read is most needed when a drive fails during the rebuild of another drive. My question is: is it normal for drives to fail during the rebuild of other drives? Wouldn't this imply parity drives are more prone to failure?
     3. During the build of the server, I plugged in one drive at a time to see which was which in order to label them. Motherboard connections were pretty easy, since SATA1 was sdb, SATA2 was sdc, etc. The problem is that I installed half the drives on a PCI-E SATA card, and I tried one port at a time but unRAID always assigned sdf to the new drive, so I ended up just adding all of them. If a drive fails down the road, how will I know which physical drive it is?
     4. Continuing the last question (3): let's assume DISK6 fails. I buy a new drive, turn off the system, disconnect drive 6 and boot up again (without the new drive, just to see if I'm working on the correct one). What if it wasn't the failing drive? What if I now see DISK6 FAIL and DISK4 missing? Can I just plug drive 4 back in and try the next one until I know for a fact it is the failing one? And then replace it with the newly bought one?
     5. I'm using two NVMe drives in a RAID1 config as a cache pool, so when writing to a share my W/R speeds are capped by my gigabit connection; I assume, though, that I'd get to ~1GB/s if I had a 10GbE network. That being said, during the parity-sync process I saw it ran at an average of 101MB/s, since it works directly on the WD REDs. Would this speed be higher if I didn't have parity? (This one is just out of curiosity.)
     Thank you very much to anyone who read all this, and especially to those who -I hope- will take some time to answer
  9. Hi again! I solved this thanks to a friend of mine who knows his way around Linux waaay better than me. For it to work properly, the fstab line must be:
     //IP/ShareName /media/ShareName cifs guest,uid=1000,iocharset=utf8 0 0
     Thought I'd post it in case anyone ever has the same issue. Cheers!
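     One extra note in case it saves someone a reboot loop: the same options can be tested with a one-off mount before committing them to fstab (this assumes the share allows guest access, that 1000 is your desktop user's uid, and that the cifs-utils package is installed on Ubuntu):

         sudo mount -t cifs //IP/ShareName /media/ShareName -o guest,uid=1000,iocharset=utf8

     If that mounts and new files show up, the fstab line above should behave the same after a reboot.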
  10. So I'm running an Ubuntu VM with access to one of my unRAID's shares, via fstab as follows:
     ShareName /mnt/ShareName 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0
     The share mounts OK and I can see all the files, etc., but if I create or copy a new file, it won't show up. It's as if the accessible data is only what existed when the share was mounted. I want this VM to run rclone now and then for different scenarios, but if the VM doesn't see new files... it'll never do anything. Any help? Thank you!
  11. Hi. Thanks for the responses, it's good to know! I don't need a dedicated GPU, I just need the system to perform some work on the files. For instance, I need to run ffmpeg on some of the files prior to handing them off to clients. Example scenario: I leave my laptop copying something to the unRAID server and, when it's done, I go to sleep. Then, at night, the VM would run a script checking if there are new files in X folder, run some scripts on them and put the results in another share, which is synced with a Google Drive, for instance. The next morning I'd just need to send the link for the files to the client. That's my idea anyway; I guess this can be easily done :) A rough sketch of the kind of script I mean is below.
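     Here it is, with every path and the ffmpeg settings made up purely for illustration:

         #!/bin/bash
         # Hypothetical watch-folder job: transcode anything new in IN
         # and drop the result into OUT (the share synced to Google Drive).
         IN=/media/Incoming
         OUT=/media/Outgoing
         mkdir -p "$IN/done"
         for f in "$IN"/*.mov; do
           [ -e "$f" ] || continue            # glob matched nothing: skip
           base=$(basename "$f" .mov)
           # -n: never overwrite, so an already-processed file is skipped
           ffmpeg -n -i "$f" -c:v libx264 -crf 20 "$OUT/$base.mp4" \
             && mv "$f" "$IN/done/"           # archive the original on success
         done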
  12. tl;dr: Is it possible to run unRAID headless (yes), but also with a Linux VM that is itself headless (accessible through screen sharing on a Mac, for instance)? A VM that runs scripts periodically on the shares it has access to.
     -----------------------------------------------------------------------------
     Hi! I'm a new user; I just registered to see if you guys can shed some light on my doubts. I own three PCs: one personal and the other two for work (I'm self-employed). I work with very large files; some might be just 1GB, but others can go up to 100GB. Until now I've been working with external drives and network sharing between PCs, but recently I started to need to back up some stuff, and external drives just don't cut it. Some weeks ago I renewed one of my systems, so I thought of putting the old one to use with unRAID. Specs: Ryzen 7 1700, ASRock AB350M Pro4, 16GB G.Skill memory, one SSD and one 2TB mechanical drive (WD Blue). I downloaded unRAID, activated my trial period, and started using it. I only have one drive, yes, but this is just a test to see if I can fit it into my workflow and, mainly, if it's stable for my needs. If I end up deciding I want to keep it, I'll buy 4 WD Red and set it up properly. For now I don't store anything critical on it. It's been working great these last few days, stable and always showing up (I had to add a line of code to the go file, regarding C-states, but everything has been OK after that). So now I'm wondering if I could have this PC do more stuff. I'd like to know if I can run an Ubuntu VM (I know that can be done), but completely headless, just accessing it through screen sharing from one of my other computers in case I need to do something with it. The idea is to leave it be 24/7, doing its work. The work I want this VM to do is run scripts: scripts which check if certain folders have files, in which case the script will call other scripts to perform actions on those files depending on what they are. Not gonna get into specifics here, since that isn't the question. So... could this be done? Could I have the unRAID server + VM completely headless and just access it whenever I need to, for instance to add a new script for a new project? Thank you very much!
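     For the "runs scripts periodically" part of my own question: from what I've read, a plain cron entry inside the VM should cover it, e.g. (the path is hypothetical):

         # run the watch-folder script every 15 minutes
         */15 * * * * /home/user/bin/process_incoming.sh

     with the VM itself only touched over screen sharing or SSH when a new project needs a new script.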