Robot

Posts posted by Robot

  1. On 4/27/2022 at 10:26 AM, JorgeB said:

    Not a Mac user but there was an issue with OSX forcing sync writes with Samba, not sure if current releases are still affected but worth trying:

     

     

     

    Mmm... interesting indeed!! I do have an M1 Mac Mini running Big Sur.

I'll have to try it out when the issue happens regularly again. It's so random that sometimes it only happens half the time.

     

As soon as the issue becomes persistent while copying some files over, I'll try this.

  2. Hi @jonp! Thanks for replying!

     

    9 hours ago, jonp said:

    SHFS Overhead

    [...]
    The downside is that there is a bit of overhead, but I'll be honest, you're getting a fraction of the overhead I did years ago when I experimented with SMB and 10gbps connections for a LinusTechTips video. 
    [...]

     

I see, and I understand. But yes, as you say, in the end I'm getting pretty good performance, at least on the Mac. I hope to also equip my Windows machine with 10Gbe at some point; hopefully the overhead loss there is as low as it is with the Mac!

    But as you said too, I was expecting to push it even harder :)

     

    9 hours ago, jonp said:

    Writing directly to Cache Disks

    And once again you are correct in that writing directly to the cache (bypassing SHFS entirely) will deliver full performance as expected.  This is because SHFS doesn't have to step into every transaction and manage it.

     

    Yeah... although it's not worth the compromises to be honest :/

     

    9 hours ago, jonp said:

    Weird halt times at the end of copying files

    This is news to me, but I have asked the team to look into it and attempt to reproduce it.  If there is a bug, we'll have to reproduce it to figure out how to solve it.


This has happened when copying over laaaarge files, I'm talking >20GB at least. And I'd swear it happened after having copied several files.

I noticed this for the first time when I was doing my first "speed tests" over the 10Gbe connection. I selected a random 50GB video export and copied it over. I deleted it using Krusader and copied it again. And again. Sometimes to different folders and/or shares, just to check speed. In between copies I might do something else.

     

I don't know how to reproduce it exactly. I'm copying a 51GB file right now just to test, and speeds are all over the place. They jump between 300 and 900MB/s constantly.

I did the copy again, this time through the terminal so I could time it precisely:

The file is 51GB according to du -h, but macOS reports 54.88GB.

--> It took 1 minute 58 seconds, which works out to around 450MB/s depending on which size you take as correct.
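
In case anyone wants to reproduce the timing, something like this works (a sketch; the file name and mount point are placeholders):

# Time a single large copy to the SMB share mounted under /Volumes/Share:
time cp ~/Exports/big_export.mov /Volumes/Share/

# Throughput ≈ size / elapsed time, e.g. 54880 MB / 118 s:
echo "54880 / 118" | bc -l    # ≈ 465 MB/s (or ~432 MB/s if you use the 51GB figure)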

     

    9 hours ago, jonp said:

    Terribly slow transfers of certain files/folders

    Please expand on test examples here.  Size of the file, length of the transfer in time, etc.  This isn't something I've witnessed first hand.

     

I've encountered this issue a few times, and I can't really tell in which scenarios exactly, except for one:

If I run a backup in Carbon Copy Cloner from my main disk to a specific folder on my UNRAID server, it takes several hours. Instead, if I do it to an external NVME plugged in via Thunderbolt, it takes around 2 minutes.

I know 10Gbe maxes out at like 1/3 the speed of the external NVME, but even 2 minutes ×10 would be just 20 minutes, not 3-5 hours :S
I abandoned the idea of backing up to my UNRAID and I'm doing everything to this external NVME.

    There are more examples... but I can't be precise. The other day I was copying over some files from my server, such as backups made by CA Restore/Backup (usb, vms, appdata), and it was doing the transfer at 5-10MB/s for some reason. I tried copying some of those files now manually, but it hits 150MB/s and then just finishes.

     

    9 hours ago, jonp said:

    Longer Term Solution

    At some point, maybe we allow you to create a primary pool of storage that isn't dependent on SHFS?  That'd be neat, eh?  ;-)

     

Well, that would be awesome!! Is it a new feature coming in the next release?? 😮
     

    Thank you!

  3. Hello again,

    After looking into this a bit more, I found out several things:

     

    SHFS Overhead

When working with user shares, UNRAID seems to add an SHFS overhead to SMB transfers (or something like that, I didn't quite understand what it is).

     

I'm not sure what this overhead is for, but it causes a performance hit. Apparently, how bad the hit is depends on the system itself.

    In my case, iperf3 shows a connection of 9.41Gbps, which should translate to over 1.1GB/s, but instead I get around 890 - 920MB/s on average.
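
Quick napkin math on the expected ceiling, in case anyone wants to sanity-check me:

# 9.41 Gbit/s line rate, divided by 8 bits per byte:
echo "9.41 * 1000 / 8" | bc -l    # ≈ 1176 MB/s theoretical maximum
# versus the ~890-920 MB/s I actually see when writing to a user share.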

     

It's not super bad, but it could be better. I guess I should be thankful that my performance hit is only around 15-20%.

     

    IMPORTANT NOTE: This only affects writing to UNRAID shares. Reading from them does max out the connection. (As long as the disks allow for it).

     

    Writing directly to Cache Disks

Following up on the SHFS overhead, it turns out that you can indeed get 100% network performance, as long as you accept a lot of compromises.

This is achieved by enabling disk shares (showing all disks as shares), and then mounting your cache drive directly.

     

When transferring files to my cache disk share, I do indeed get over 1.1GB/s transfer speeds, both writing and reading.

    Just to clarify, I did NOT accept those compromises. I just enabled the cache share to test out performance.
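
For anyone wanting to run the same test, this is roughly how the disk share gets mounted from the Mac side (a sketch; "tower" and "cache" are placeholders for the server and disk share names):

# Mount the cache disk share directly over SMB, bypassing the user share:
mkdir -p ~/cache_test
mount_smbfs //user@tower/cache ~/cache_test

# ...copy a big file and compare against the same copy to the user share...
umount ~/cache_test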

     

    Weird halt times at the end of copying files

    With all these tests, I realized that when copying large files (several GB video files) to my UNRAID Server, once the copy gets to the end, it halts for around 10-20s. Then finishes.

I was copying a 65GB file to my server, and it took around 1min 12sec to get to the end of the progress bar, then it halted there for 20s, then finished.

     

Not sure why this happens, but it's consistent. I haven't tried through Windows; I don't have a Windows PC with a 10Gbe connection, not yet at least.

     

    Terribly slow transfers of certain files/folders

    When copying certain files over the 10Gbe connection, transfer speeds are terribly slow.

For instance, transferring an appdata backup in a .tar file goes at around 5-10MB/s.

    I guess I'll just have to live with all these, for now at least.

    I wonder if all these issues are caused by the SHFS Overhead.

  4. Hi! Thanks for the response!

     

    I was getting excited reading the post, but then I realized that it doesn't really apply to my case, sadly.

I'm already getting the speeds the OP of that thread finally managed to get (a little better, actually).


    I ran an iperf3 yesterday, and I got 8.31Gbps, which honestly seemed a little off compared to what you see on the forums.

    All my shares are set to public, since only I have access to my network.
    BUT, I decided to try out a private share with a user, and running an iperf3 today got me 9.41Gbps...

     

    I restarted my Mac and connected again just as a guest, and I'm still getting 9.41Gbps, not sure what happened yesterday.
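
For anyone wanting to reproduce the comparison, a basic iperf3 run between client and server looks like this (the server IP is a placeholder):

# On the UNRAID server:
iperf3 -s

# On the Mac, pointing at the server's IP:
iperf3 -c 192.168.1.x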

     

That being said, though, transfer speeds are still the same. Around 900+ MB/s.

    Again, it is not bad, but I expect at least 1.1GB/s from a 9.41Gbps connection.

If anyone has any idea what is going on, I would appreciate it!
    (Can't really try out from a Windows machine as of now)

  5. Hi! I work with 4K video (extensive editing and a lot of exports), and I've always wanted to switch to 10Gbe.
    I finally did it! Yay!! I purchased:

    - ASUS XG-C100C (For the unRAID server)
    - Netgear XS508M-100EUS (as a Switch)
    - My working machine is a Mac Mini M1 (which has 10Gbe built in). (Planning on getting a Mac Studio at some point).

    Now, my speeds aren't BAD, but I was expecting more. Any ideas what might be going on here?

     

    While copying a single 50GB video export, I'm getting speeds around 900MB/s. See image here.
It says 8.1Gbps, which should be 1012MB/s, but macOS reports between 880 and 920MB/s. That also checks out when timing the transfer.

    When running a Blackmagic Speedtest, I'm getting numbers around 800MB/s Write and 1000MB/s read. See here.


Considering 1Gbe is pretty efficient and I was regularly getting ~115MB/s (920Mbps), which is just an 8% loss, I was expecting at most a 10% loss here too, i.e. 9Gbps, or a little over 1.1GB/s.

    IMPORTANT: This is NOT a disk limitation. My cache disk is a Corsair MP400 4TB drive, which gives speeds over 3GB/s both write and read.

     

    Thank you!!

  6. Hi! I did a Clear Disk + Post-read verification of two 2TB drives.

    I was using the old plugin, and for some reason after the zeroing (success apparently), both drives failed and then disappeared from the system. I had to reboot in order for them to show up again.

I don't know what happened, but since I only found out about this new version because of "issues" in the old plugin, I assume it was something like that.

     

    Now, I'm assuming my disks are zeroed, so I'm running a "Verify Disk" on them.

    Is that the same as the "post read verification"?

    Thanks!

  7. I ran dmesg and the last three lines are as follows:

     

    CIFS: VFS: Autodisabling the use of server inode numbers on \\192.168.1.114\Share
    CIFS: VFS: The server doesn't seem to support them properly or the files might be on different servers (DFS)
    CIFS: VFS: Hardlinks will not be recognized on this mount. Consider mounting with the "noserverino" option to silence this message.

     

    All red, denoting ERROR level I assume.

I hope it helps diagnose the problem :/

  8. Hi! I have a VM with one UNRAID Share mounted to it. The VM is just running scripts in the background using crontab.

    I mounted the share with fstab, as follows:

     

    //ServerIP/Share_name  /media/Folder_I_Created  cifs  guest,uid=1000,iocharset=utf8  0  0

     

    (Server has fixed IP)

I sometimes connect to my VM via VNC and see that the scripts have stopped doing their tasks. Upon inspection, I always find that the share is inaccessible.

     

Navigating to it through the terminal gives no results, as if it wasn't mounted. Opening my home folder via the UI and clicking on the share icon (it still shows) gives an error (not accessible). There's the "Eject" button, but it does nothing.

     

    Using "sudo mount -a" in terminal does nothing, I usually have to manually restart the VM in order to get it mounted again.

Any ideas what might be happening?
I recently switched to a new UNRAID server; I configured this VM exactly like the one on my old server, where I never had this issue.

     

    Thank you!!

  9. 6 minutes ago, JorgeB said:

    Yes, then format after array start.

     

    Nice, thanks!

     

    5 minutes ago, ChatNoir said:

    Unless your drives are empty and/or you don't care about their content, you should move the data to the array before changing the File System and back to the pools once the format is finished.

     

    Nah it's OK, this server is new. I'm still using the old one since I need it 24/7 for work. Once everything is set up, I'll move data across.

     

    Thank you both!

  10. 3 minutes ago, JorgeB said:

For single device pools, and if you don't care about the extra btrfs features, like snapshots or checksums, you should use xfs.

     

I see. I'll have to dig into both of those options to decide whether or not it's worth it for me.

     

Now, in case I want to go XFS, what should I do? Stop the array, go into the cache settings, and just set it to XFS? And then install the TRIM plugin, of course.

     

    Thank you.

  11. Hi all!
     

I have some questions about the filesystem for my pools. I'm going to have three pools, as follows:

    - 1x NVME 4TB for data cache

    - 1x NVME 1TB for Dockers and VMs

- 1x SSD 2TB for various VM-related stuff

     

    I created the pools and the system automatically gave them the btrfs filesystem. Afterwards though, I see you can change it to XFS.

     

I understand btrfs is the only option for RAID0/1 pools, but what about single-drive caches?

    + I've read people recommending btrfs, because it needs no TRIM.

    + I've read others recommending XFS, since it's more robust apparently and you can TRIM with a plugin anyways.

     

    Any thoughts?

    Thank you!

  12. Thank you both!

    Something else that just came to mind: I guess I should set the shares to not use the cache pool until all data is copied over, right?

     

    32 minutes ago, JorgeB said:

    Yes, if you want parity to remain valid make sure you mount them read-only, it's an option with UD.

     

    You can also disable parity in the new server for faster simultaneous multiple disk copy.

     

    Thank you!

     

    32 minutes ago, JonathanM said:

    Yes.

     

    However, unless you manually mount the drives read only, parity will need to be corrected when you put the drives back. Also, I doubt you would see a huge speed difference, and since you won't be sitting in front of the server waiting for it to finish, the difference between 45 hours and 40 hours isn't going to be meaningful.

     

    In my opinion the marginal gains aren't enough to justify all the risk of physically moving the drives around.

     

Interesting. I thought the speed would be around 150-200MB/s, which would be like 50% faster (assuming 175MB/s vs 115MB/s). That would've meant roughly 30h instead of 45, and since I need the server for work stuff... Such a shame.

     

    I will then do it over Ethernet.

  13. Hey all!

I am going to fully upgrade my server; all the hardware will be new. It will actually be a new computer altogether.

     

    Current server has 7x WD Red 4TB drives (2 parity + 5 data).

    New server will have 4x Seagate IronWolf 14TB drives (1 parity + 3 data).

     

Now, I want to move all data from the current server to the new one. The obvious way is connecting them to the same router and starting to copy. 1Gbe, so ~115MB/s. It'd take around 45h considering how much data I have.

     

Question: Can I take the data disks out of the current server, plug them into the new server as unassigned devices, copy the data off them, and then put them back into the current server?

     

    Thanks!

  14. On 8/30/2021 at 4:12 PM, trurl said:

    That's not so old that it should cause any problems updating to latest version. Some new features in 6.9+, especially multiple pools to better take advantage of faster (SSD) storage not in the parity array.

     

    That's good to hear. Yes, the new multiple cache pool feature will be super handy for me :)

     

    On 8/30/2021 at 4:12 PM, trurl said:

    Your current drive count maybe doesn't really justify dual parity, but you are already talking about going for more drives.

     

    Personally, I prefer small form factor, so when I need more capacity I upsize disks instead of add disks. Each additional disk requires more hardware to support it, and maybe more importantly, each additional disk is an additional point of failure.

     

    Yes, indeed. After investigating a bit with my little free time, I've come to the conclusion that it will probably be for the best if I upgrade the disks themselves.

My Node 804 is nice, but I need to go full ATX. I need two graphics cards for two VMs, and I also plan on adding a 10Gbe connection, so another PCI-E slot is needed there.

My current option is going for a Fractal Design Define 7. I prefer this one over the Meshify 2 because of noise; the Define 7 has much better noise reduction. Yes, it also has lower thermal performance, but to be honest that's not the main goal of an UNRAID server.

     

I'll probably buy 4x Seagate IronWolf 14TB drives and leave one as parity, for a total of 42TB of usable storage capacity.

     

    The case has space for like 14-16 drives (if you find stock of the accessory that is), so I'll be covered if in the future I want to add more 14TB drives.

    I've thought about adding all my 4TB drives as well, but then that would be 11 drives with just one as parity... (one of the 14TB ones). Doesn't seem very ideal...

     

    On 8/30/2021 at 4:12 PM, trurl said:

    The disadvantage of that approach is you may have to retire (or repurpose) smaller disks, and you don't get as much extra space upsizing a disk as you would if you had simply added that new disk to a new slot. And, of course, larger disks take longer to rebuild or parity check.

     

Hadn't thought about that... I haven't had to rebuild any disk in these two years, but it might very well happen in the future.

Now I'm considering 4TB disks again... haha

     

    On 8/30/2021 at 4:12 PM, trurl said:

    My repurposed drives go in my backup server. I don't bother to backup some things so don't need as much capacity there. Or I repurpose them for offsite backup of the really important stuff.

     

I've considered doing this as well, having an offsite server somewhere (my parents' house, for instance), but I'm afraid it might be a hole that attackers could use to get into my data.

     

(I'm basing this fear on the fact that an accessible server is a weaker point than none.)

     

     

    Thank you very much for your help!

  15. Hi all!


First of all, I'd like to apologize for my English. Although it's decent, I'm not a native speaker, so something might get lost in translation.

    Also... long post. SORRY

     

    I've had an UNRAID server for around 2 years now, and I LOVE IT.

That being said, though, I haven't maintained it very well. The OS hasn't been updated in ages, I've barely done any parity checks, etc.

     

    I recently started to run out of space, and also I had the chance to get a 3900X to upgrade the 1700X the server is running. So I decided it was time to upgrade the server, and also (finally) update everything and configure it properly.

     

    SERVER CAPACITY

I have 7x Western Digital Red 4TB drives. I'm using 2x parity disks and 5x data disks. I'm thinking I might've overdone it and that I could take one parity disk out and make it a data disk.

     

    • Is this a good idea? Almost every UNRAID system I see has just 1 parity disk.
    • In case I go for it, what should I do? I'm guessing just stop the array, remove it from parity, and add it as data disk 6?
• Anything I should do prior to that? (Like a parity check or something?)

     

Also, I'd like to add a couple more disks. My 804 has space for 10 drives in theory: 2x 4-drive cages plus the possibility of installing 2 more drives at the bottom of the case, near the graphics card.

     

    I don't know why I only installed 7 drives, but I should be able to add one more drive there. I could also take a look at adding those 2 bottom drives, although it apparently needs rubber feet and honestly I don't know where I might've put those... maybe amazon has some generic ones?

     

    Another possibility is upgrading to a Define 7, but it only fits 6 drives by default and the expansion trays are impossible to find anywhere it seems.

     

Honestly, I don't feel like going the rack server route... especially noise-wise. Also it would be a much more expensive upgrade, considering I don't have a rack cabinet or anything else for that matter.

     

    HARD DRIVES

    As I said, my server is rocking WD Red drives. I recently learned about the SMR/CMR issue with them, but I don't know how to check if my drives are one way or the other.
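
From what I've gathered, the model number gives it away, so something like this should tell me (a sketch; sdb is a placeholder, and the EFRX/EFAX distinction is what I've read on the forums rather than something I've verified):

# Read the model number from each drive:
smartctl -i /dev/sdb | grep -i model

# From what I've read: WD Red 4TB "WD40EFRX" models are CMR,
# while the newer "WD40EFAX" models are SMR.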

     

    • Should I go for Seagate IronWolf this time?
• Any advantages to going 8TB or more? 4TB drives have the best $/TB ratio there is, both for WD Red and Seagate IW.
    • If I went for higher capacity disks, I should get at least two, right? (Because the parity disk needs to be the highest capacity if I'm not mistaken?)

     

    TWEAKS AND "stuff"

When I created my server, I remember Ryzen wasn't all that widely supported, and I remember making several tweaks to the system: adding a TRIM tweak (although I have a TRIM plugin installed, so that might be it), and adding some stuff to a config file via the terminal, like disabling C6 states, and something else.
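
If I'm not mistaken, those kinds of manual tweaks usually end up in the go file and the syslinux config on the flash drive, so I can probably rediscover them with something like this (a sketch; I haven't gone through mine yet):

# Boot-time tweaks are usually appended to the go file on the flash drive:
cat /boot/config/go

# Kernel parameters (e.g. IOMMU or C-state workarounds) live in the append line here:
cat /boot/syslinux/syslinux.cfg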

     

• What should I do with all these tweaks? Especially since I don't remember exactly what I did.
    • I even considered building the server from scratch, just to write everything down... Although this is probably a very bad idea.

     

    I probably have a lot more questions, but my head is a mess right now after days of looking around, and also I don't want to make the post even longer than this.

     

     

    Thank you very much to anyone who read the post (or part of it).

Hi! I didn't want to respond yet, since I've been trying out the native LG C9 Plex app this last week. It works: it direct plays everything I've tried, from HD to 4K, and HDR too.

     

The problem I've encountered is that it sometimes stops the playback to buffer. It happened a lot with The Lion King 4K HDR, for instance. It could play the movie perfectly for 35 minutes and then pause to buffer like 4 times within a 2-minute period. I've read it might be related to subtitles (I turned all of them off) or TrueHD audio tracks, I think (I tried both DTS and AC3).

     

The TV is wired to a router acting as a repeater. I've run speedtests on a Mac Mini which is also connected to this same router and I get around 250/300 DL/UL speeds (the main router has 600/600), so I assume the connection to the main router (the one unRAID is connected to) is perfectly fine.

For some reason this TV doesn't have a 1 gigabit ethernet port, just 100Mbps. It should be plenty though, especially since the Lion King movie is "just" 54Mbps for 4K HDR.

     

    My deduction then is that it's related to the native Plex app on the TV?

I might go for a Shield; I won't accept pauses to buffer in the middle of a movie. I just liked the idea of using LG's remote for everything. The Shield will add another remote, and although that's not really a problem, it's an inconvenience. On the other hand, I'd get better 1080p upscaling with the Shield as far as I know, so that's nice.

     

    Thanks!

     

P.S.: In case I buy the Shield and the buffering still happens, I'll update the thread.

  17. On 1/3/2020 at 7:31 PM, Xaero said:

    What TV did you get? A lot of the SmartTVs on the market have Plex available for them natively now.

     

    Hi! I got an LG C9, but the Plex app is... well, I'm not really a fan. After searching online I found that it's not one of the best clients and has potential problems with HDR and 4K, which is a bummer since that's what I intend to watch the most :/

     

    Thanks!

  18. Hi! I hope this is the right place to post this thread :)

I use unRAID for my job, but half a year ago I decided to start moving my Blu-rays to a new share and installed the official Plex docker. This Christmas we got a new TV (4K, HDR, blah blah) and ditched all my old hardware.

     

    Now, I've been reading that the NVidia Shield is the best hardware there is for this purpose, but I've also read it has some problems with Plex? All threads I find are from ~2017, so I'm not sure what the current state is.

     

    Any advice on this? Is the NVidia Shield (non pro) gonna be a good match for this?

    Will I be forced to use Kodi instead? Or buy a different box?

    Thank you!

  19. On 9/12/2019 at 3:28 PM, Taddeusz said:

    Do you have IOMMU enabled in your BIOS? I was getting similar issues with my NVME cache drive. I added "iommu=pt" to the append statement in my syslinux configuration and it fixed that. Basically what this statement does is enables IOMMU only for the devices you want to pass through to a VM.

     

I have it enabled, yes. Although I didn't enable it myself; it seems to be enabled by default when loading the BIOS defaults.

    What is the "syslinux configuration",  and what is the "append statement"? Only thing I manually edited so far is the go file in the flashdrive share, in order to disable c-states of my Ryzen.

     

    Thanks!
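
P.S.: For anyone else wondering, I believe the file in question is /boot/syslinux/syslinux.cfg on the flash drive, and adding iommu=pt would look roughly like this (a sketch based on what I've seen posted; the default append line may differ between versions):

# /boot/syslinux/syslinux.cfg (relevant section only)
label Unraid OS
  menu default
  kernel /bzimage
  append iommu=pt initrd=/bzroot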

  20. Hi! I built my new unRAID server like a week ago, and after the parity sync and all I finally started using it on September the 7th.

Since I have a RAID1 cache pool of two 1TB M.2 drives, I installed the TRIM plugin as suggested by "Fix Common Problems".

     

I have it scheduled to run every night at 5:30, but I get these errors on one of the M.2 drives every time the TRIM plugin starts.

     

    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2144
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2162240
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 46704704
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 48277568
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 50374720
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 52471872
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 54569024
    Sep 8 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 56666176
    Sep 8 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 25 block group(s), last error -5
    Sep 8 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5
    Sep 8 23:18:51 unRAID emhttpd: shcmd (3681): /usr/sbin/hdparm -y /dev/nvme0n1
    Sep 8 23:18:51 unRAID root: /dev/nvme0n1:
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2112
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2141400
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60157808
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60860480
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 62957632
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 65475464
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 67247928
    Sep 9 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 70005864
    Sep 9 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 281 block group(s), last error -5
    Sep 9 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2144
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2180088
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 28342784
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 57390288
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60157808
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60860480
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 62957632
    Sep 10 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 65475464
    Sep 10 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 233 block group(s), last error -5
    Sep 10 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 2144
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 10304
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 43072
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 28342208
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 57390288
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60157808
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 60969520
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 62957632
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 65475464
    Sep 11 05:30:01 unRAID kernel: print_req_error: I/O error, dev nvme0n1, sector 67247416
    Sep 11 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 169 block group(s), last error -5
    Sep 11 05:30:01 unRAID kernel: BTRFS warning (device nvme0n1p1): failed to trim 1 device(s), last error -5

     

    It seems like some sectors are bad? It's not always the same ones... Also it doesn't seem to be increasing day by day?

    Should I be worried? This M.2 drive is brand new.

     

    By the way, this particular M.2 is NVME; the other one on the pool is M.2 SATA, since my mobo didn't support two NVME disks.
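
In case it helps with diagnosing, I guess I can trigger the same thing manually instead of waiting for the 5:30 schedule; something like this should do it (a sketch, assuming the pool is mounted at /mnt/cache):

# Trim the cache pool manually and report how much was trimmed:
fstrim -v /mnt/cache

# Then check whether the same I/O errors show up in the kernel log:
dmesg | tail -n 20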

     

    Thanks!