Kevek79

Members
  • Content Count

    69
  • Joined

  • Last visited

Community Reputation

4 Neutral

About Kevek79

  • Rank
    Advanced Member
  • Birthday 01/02/1979

Converted

  • Gender
    Male
  • Location
    Vienna/Austria
  • Personal Text
    life is short, so be nice to each other

Recent Profile Visitors

241 profile views
  1. CPU pinning does not make a given core unavailable to the rest of the system. Pinning just tells the Docker container or VM in question which cores to use, but it by no means isolates those cores from the other containers, other VMs or Unraid itself.
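     To illustrate the difference, a minimal sketch (the core numbers are made up for the example): pinning is set per container or VM, while real isolation needs the isolcpus kernel parameter in syslinux.cfg.

        # Pinning: this container may only run on cores 2 and 3,
        # but Unraid and every other container/VM can still use them.
        docker run --cpuset-cpus="2,3" ...

        # Isolation: added to the append line in /boot/syslinux/syslinux.cfg,
        # this keeps the Linux scheduler from putting anything else on cores 2 and 3.
        append isolcpus=2,3 initrd=/bzroot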
  2. Can't be of any help with the Moonlight streaming, but your issue with Plex sounds confusing. So, to make sure I understand you correctly: you can reach your server via app.plex.tv (Plex Relay) but are limited to 2 Mbit/s streaming (sounds like a set limitation to 720p, which I think is the standard for the web app), but if you try to connect via https://YourIP:32400 you do not get access. Correct? Does your Plex Docker run in host mode on the same IP as your Unraid server? And is your public-facing IP an IPv4 address?
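     To narrow that down, a quick test from another machine on your LAN might help (the IP is a placeholder; 32400 is Plex's standard port):

        # Should return the Plex web UI if direct access works.
        # -k skips certificate validation, since Plex serves a *.plex.direct certificate.
        curl -vk https://192.168.1.10:32400/web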
  3. Dual parity for a total of four drives is a bit overkill IMHO, regardless of the size of the drives.
  4. Frankly, I did not yet have time to look into your logs (still at work); I just wanted to see if your problem is connected to an open bug report for 6.7.2. It is a known issue there that reads from the array slow to a crawl while a write to the array is happening at the same time. So my question about the mover was to see if there is a connection to this issue.
  5. Are you using a cache drive? If so, at what time is the mover scheduled to run?
  6. If everything is running well at the moment and given that you live in an area with frequent power outages, I would say you should get a decent UPS. Don't learn the hard way that a UPS can save your data and/or hardware!
  7. Does the controller card have a Marvell controller?
  8. Welcome to the Unraid community! SSDs are not recommended for use in the array. But you could use one HDD as parity and one HDD as data, and create a redundant cache pool out of the SSDs.
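     For reference, Unraid builds multi-device cache pools as btrfs RAID1 by default; a quick (hedged) way to check that the pool really is redundant:

        # Data and Metadata should both report RAID1 for a redundant two-SSD pool.
        btrfs filesystem df /mnt/cache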
  9. Link is 4 posts above yours. Damn, Squid was faster.
  10. Also, some of your shares are currently not on the drives I would expect given their cache settings. E.g. appdata is set to "Only" but its files are on cache, disk 1, 2 & 4, and the share "D-------s" is set to "No" but its data is only on the cache drive.
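      A quick way to see for yourself which drives a share's files actually live on (using appdata as the example):

        # Lists every array disk and the cache drive currently holding part of the share.
        ls -d /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null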
  11. Your syslog is full of error messages regarding the NVMe drive and btrfs. Might be a case for @johnnie.black
  12. Channel and slot are recorded in the last line of the error message. So you can start by parsing the syslog for those errors and see whether it is always the same module (a sketch of this is below). But you should maybe also run memtest from the boot loader.
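      Something along these lines should pull the relevant entries out of the syslog (the path is Unraid's default):

        # Shows every memory error the kernel logged, including the
        # channel/slot locator at the end of each line.
        grep -i "EDAC" /var/log/syslog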
  13. Didn't you start this whole adventure because of a non-working Marvell controller? So why replace it with another one of those?
  14. Can you tell us a bit more about your system/config? Maybe there is some help that does not involve starting from scratch! Which port is your monitor connected to, onboard or GPU? Does your MB have a management port (IPMI)? ...
  15. I'm by no means an expert, but there are a couple of these entries in your syslog:

        Aug 21 09:53:18 Tower kernel: mce: [Hardware Error]: Machine check events logged
        Aug 21 09:53:18 Tower kernel: EDAC sbridge MC1: HANDLING MCE MEMORY ERROR
        Aug 21 09:53:18 Tower kernel: EDAC sbridge MC1: CPU 10: Machine Check Event: 0 Bank 7: cc00008000010092
        Aug 21 09:53:18 Tower kernel: EDAC sbridge MC1: TSC 119cdde8e0587a
        Aug 21 09:53:18 Tower kernel: EDAC sbridge MC1: ADDR 1a7f729580
        Aug 21 09:53:18 Tower kernel: EDAC sbridge MC1: MISC 207c3e86
        Aug 21 09:53:18 Tower kernel: EDAC sbridge MC1: PROCESSOR 0:306e4 TIME 1566366798 SOCKET 1 APIC 20
        Aug 21 09:53:18 Tower kernel: EDAC MC1: 2 CE memory read error on CPU_SrcID#1_Ha#0_Chan#2_DIMM#1 (channel:2 slot:1 page:0x1a7f729 offset:0x580 grain:32 syndrome:0x0 - OVERFLOW area:DRAM err_code:0001:0092 socket:1 ha:0 channel_mask:4 rank:4)

      The error seems to be memory related, but you will need to wait and let the real gurus look into your logs to help you figure out what is going on here.