Alphahelix

Members
  • Content Count

    130
  • Joined

  • Last visited

Community Reputation

5 Neutral

About Alphahelix

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Denmark

Recent Profile Visitors

716 profile views
  1. Hi M0ngr31, I think I may have stumbled upon the root cause after watching an episode of Linus Tech Tips. He had some problems with his VM, and it turned out to be caused by a large amount of RAM. I have 384GB (funnily enough the same amount Linus had in his machine). How much do you have in yours? I have made a thread describing my scenario, maybe you can confirm? https://forums.unraid.net/topic/82427-spontaneous-reboots/?tab=comments#comment-764418 /Alphahelix
  2. Any chance of having "huge pages" as an option for the few of us with 384GB of RAM? Or any guide to enabling this manually? It seems that systems with a lot of RAM can benefit from it, according to this video: https://youtu.be/1yFQd4MaKK0?t=654 From 09:36 Linus explains the issue.
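For anyone wanting to experiment before an official option exists, this is roughly how I imagine it could be done manually (untested; the syslinux path and the page count are assumptions for a stock install, sized here for about 64GB of 2MB pages):

    # 1) Reserve hugepages at boot: add hugepages=... to the append line in /boot/syslinux/syslinux.cfg
    append hugepages=32768 initrd=/bzroot
    # 2) Tell libvirt to back the VM with them: add this inside <domain> in the VM's XML
    <memoryBacking>
      <hugepages/>
    </memoryBacking>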
  3. Hi all, I am experiencing spontaneous reboots all the time, and it is driving me nuts. I have had this issue for some time and have tried to find a pattern; this is what I have observed so far: the reboots happen when shutting down either of my two VMs (pfSense / Windows 10), after 20 minutes to 2 hours of playing a game (in a VM), or when opening a tab in Chrome (in a VM). I have tried pulling out 2/3 of my RAM, and I have tried rotating my RAM (so the 1/3 that was not in use was mounted as a test). I have pulled out all non-essential PCIe cards, so the system now has one LSI controller and one Nvidia 1060 3GB graphics card. I have saved the syslog to the USB drive and it is attached here (note that the reboot took place between lines 3016 and 3017). I did not attach my other syslog as it took up more than 4GB. I also attached the newest syslog (downloaded from unRAID itself) as well as diagnostics. I hope this can shed some light on where to focus my efforts. Any help is much appreciated... /Alphahelix syslog xeon-diagnostics-20190806-1701.zip xeon-syslog-20190806-1701.zip
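In case anyone wants to capture the log across reboots the same way, this is roughly the idea; the exact file location is an assumption on my part, and unRAID's built-in syslog mirroring (Settings -> Syslog Server) may be the cleaner route:

    # rsyslog rule: also write every log line to the flash drive so it survives a reboot
    *.*    /boot/logs/syslog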
  4. Hi, I have a question regarding Bitwarden. I can't seem to get it to work with Chrome (I can't create a profile or log in). According to this site: https://github.com/hassio-addons/addon-bitwarden#known-issues-and-limitations Known issues and limitations: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like Cannot read property 'importKey'. To solve this problem, you need to enable SSL and access the web interface using HTTPS. So how do I enable HTTPS? Otherwise I don't see how it will work with Chrome. /Alphahelix
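(My best guess so far, untested: put a certificate and a reverse proxy in front of the web vault, something like the sketch below, where the filenames and the hostname are just placeholders; a proper certificate, e.g. via the letsencrypt/SWAG docker, would be nicer than self-signed.)

    # Generate a self-signed certificate (replace the CN with your server's name or IP)
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout bitwarden.key -out bitwarden.crt \
      -subj "/CN=tower.local"
    # Then serve the Bitwarden web vault through a reverse proxy (nginx, Caddy, SWAG, ...)
    # on port 443 with this certificate, and open https://tower.local/ instead of plain http.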
  5. Shares are best created in the unRAID GUI, and then you copy your media files over. In your case I would create the share in the GUI, then use Krusader to move the files to the new share. It SHOULD do that without any trouble (but test before moving it all). /Alphahelix
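If you prefer the command line over Krusader, the equivalent would be something like the rsync call below; the share names are just placeholders for your setup:

    # Copy from the old location into the new share, verify, and only then delete the originals
    rsync -avh --progress /mnt/user/OldShare/ /mnt/user/Media/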
  6. Ok, I know that ZFS is at the moment not possible to implement in unRAID due to legal issues. But... what if someone (unfortunately I lack the programming skills) were to take some of its features and make a rudimentary construct? Would that still be OK legal-wise? Let us pretend it is, and let me theorize how it could benefit us unRAIDers... Today we have individual disks for parity and data; only the cache can consist of multiple disks. To my knowledge (and I do not claim to be burdened with in-depth knowledge), ZFS uses vdev(s) that consist of one or more disks that can vary in size. With multiple vdevs you are able to expand the ZFS pool. vdevs use ZFS's own data protection: single, mirror, RAIDZ1, RAIDZ2, RAIDZ3, etc., where the number indicates how many parity disks are in the pool(s). Now imagine we use a somewhat similar approach in unRAID, where we bundle individual disks into a pool-of-disks (abbreviated as POD from this point on) running in striped mode via "mdadm", and assign PODs different roles like individual disks have today. In theory the benefit should be increased speed. Also, we should be able to build a parity-POD and a data-POD out of SSDs, and thus get very high transfer speeds outside the cache while still having the data protected by parity. For those wanting multiple arrays, each POD could be seen as one array. I have tried to create a visual of how disk assignment is today, and with the POD design:

Today:
8tb Disk0 - parity0
6tb Disk1 - parity1
4tb Disk2 - data0
4tb Disk4 - data1
4tb Disk5 - data2
4tb Disk6 - data3
4tb Disk7 - data4
4tb Disk8 - data5
6tb Disk9 - data6
6tb Disk10 - data7
6tb Disk11 - data8
6tb Disk12 - data9
4tb Disk13 - data10
4tb Disk14 - data11
4tb Disk15 - data12

POD design:
4tb Disk2 - POD0 - parity0
4tb Disk4 - POD0 - parity0
4tb Disk5 - POD0 - parity0
4tb Disk6 - POD1 - parity1
4tb Disk7 - POD1 - parity1
4tb Disk8 - POD1 - parity1
6tb Disk9 - POD2 - data0
6tb Disk10 - POD2 - data0
6tb Disk11 - POD3 - data1
6tb Disk12 - POD3 - data1
4tb Disk13 - POD4 - data2
4tb Disk14 - POD4 - data2
4tb Disk15 - POD4 - data2
8tb Disk0 - POD5 - data3
6tb Disk1 - POD5 - data4

In the example above, overall performance of the array should in theory be faster, even with unRAID's speed penalty. Further, POD4 should have superior read/write speed (given that no other PODs are active at that moment). And one could argue that you now have 4 smaller arrays. This will no doubt make the "master" array more vulnerable to disk failures, but by using mdadm you are not limited to RAID0; you could in theory use RAID6 in one POD and RAID0 in another, should there be a need for that. Also, if memory serves me correctly, ZFS has no way of expanding a vdev, while it is possible to expand a POD using mdadm (with limitations for RAID10). With mdadm it is also possible, for RAID6 (and most likely RAID5), to add a hot spare disk for minimal rebuild time if needed. So data residing in a POD with RAID6 will have "internal POD protection" in the form of RAID6 but also be protected by the parity POD(s). Data residing in a POD with RAID0 will only have the protection of the parity POD(s), and is thus more vulnerable than a setup with 1 disk for parity and 1 disk for data, because if ONE disk in a RAID0 array breaks down, ALL data on it is lost. So I think a 1-disk-for-parity and 1-disk-for-data solution should still be possible, at least in the form of a POD with only 1 disk in it.
Now I know some may think this is a bad idea, but I am sure some think it can be useful. I think that if Limetech chooses to implement this it should be for PLUS & PRO licenses only, as setups with 6 disks and below will get little benefit from it. I have tried to list the pros and cons of this POD design:

PRO:
1a) If theory meets practice we should see a read speed bump for each disk added to a POD.
1b) If theory meets practice we should see a write speed bump for each disk (every 2nd disk for RAID10) added to a RAID0/RAID10/RAID5/RAID6 POD.
2) Possibility to "kind of" have multiple arrays.
3) Possibility to have hot spares for selected PODs.
4) Selective, elevated data protection for each POD.

CON:
1) More vulnerable to data loss when RAID0 is used in a POD.
2) Less available space from the disks when anything other than RAID0 is used in a POD.
3) CPU usage may be higher due to the extra "layer" of RAID.

This suggestion is meant as inspiration; I have only used what I know or have heard of. I know it is a bit far out. But at least no one got hurt. /Alphahelix
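To make the idea a little more concrete, a minimal sketch of how one striped data-POD could be built with stock mdadm (device names and the mount point are assumptions for illustration; nothing like this exists in unRAID today):

    # Build a 3-disk RAID0 "POD4" from three 4TB disks
    mdadm --create /dev/md4 --level=0 --raid-devices=3 /dev/sdd /dev/sde /dev/sdf
    # Put a filesystem on it and mount it where an array "slot" would normally sit
    mkfs.xfs /dev/md4
    mount /dev/md4 /mnt/pod4
    # A RAID6 POD with a hot spare would be created the same way, e.g.:
    # mdadm --create /dev/md2 --level=6 --raid-devices=4 --spare-devices=1 /dev/sd[g-k]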
  7. If you prefer a GUI, my recommendation is Binhex's Krusader. But TQ's method will work great too.
  8. Without having tested it, I think one way would be to use the pfSense firewall (or similar) and make a rule (in & out) for the traffic of that specific Docker container. I did find a YouTube video on setting up a VPN (in the example it is PIA) in pfSense. I hope it can help you; if not, it may at least give some inspiration.
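For pfSense to be able to match the traffic, the container would need its own address on the LAN; roughly something like this (the br0 custom network, the IP and the image name are placeholders, and again this is untested):

    # Give the container a fixed IP on unRAID's br0 custom Docker network
    docker run -d --network=br0 --ip=192.168.1.50 --name=myapp myimage
    # In pfSense, write in/out rules (and a VPN gateway rule) that match only 192.168.1.50,
    # so just this container's traffic is forced through the VPN.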
  9. Following here, as I have the same issue.
  10. Could it be (it is just a thought) that editing the XML file, under the NIC setting, to vmxnet3 will do the trick? It made my pfSense go from 1Gbit to 10Gbit on the virtual interface. /Alphahelix
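The edit I mean is the model line of the VM's network interface in the libvirt XML, roughly like this (the bridge name is whatever your VM already uses; only the model line changes):

    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='vmxnet3'/>  <!-- was e.g. virtio or e1000 -->
    </interface>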
  11. It is the memory controller (if I am not mistaken, located in the CPU) that determines the max amount of RAM; the 192GB max is due to the memory controller's limitation. I am sure that if you put in another pair of CPUs the number would be higher. The max memory listed by the motherboard manufacturer is most likely just the max the manufacturer has tested.
  12. Hi Răzvan Zeceș, as @trurl informed you, it is root only; no one else has access to the webUI. The same goes for SSH.
  13. Yes. I use a Corsair HX1200; I bought it specifically to get native 2-CPU support. I will try that; however, I can't fit both 2-slot GPUs, the LSI (used for cache) and another card, as it would be blocked by one of the GPUs.
  14. I feel your pain. But congratulations on your 3 new 10TB drives. I am only VERY envious.
  15. Dear all, I need help with my unstable server. For many months I have tried to find the root cause of my problem(s). I feel that I have tried everything, but since I still have the problems, I obviously have not. Let me list the symptoms:
1) At boot unRAID can see all inserted PCIe cards, but a few minutes after the array starts (I have not experienced it before the array starts, even with autostart disabled) it suddenly drops 3 of the PCIe cards. I have tried with two NVMe drives, an LSI RAID card and a Chelsio 10Gbit NIC (all tested on another system with no problems). If I remove all other cards it seems to be working, though I cannot be sure, because removing the other cards cripples the system.
2) If I remove the "excess" PCIe cards the system "runs" for some hours. I have created a Grafana graph; there I can see I have had restarts on the 17/19/23/23 and system halts on the 21/25/27. With restarts the server just reboots unprovoked. With system halts the server halts, 100% unresponsive to anything: SSH, HTTP or local input (ACPI power button). Only the reset button or the power button (on the PSU) works. I do not have a keyboard on the unRAID box, as I have routed 1 of my 2 USB controllers to a VM (the other has a single port, which is used for the unRAID USB stick).
3) I have tried shutting down my VM to see if that is the culprit, and it does seem to have an effect on stability, though I cannot make it conclusive. Also, a virtual PC affecting hardware to such a degree is in my mind not very likely (but not impossible).
When it works, all seems fine. I have tried using different PCIe slots on the motherboard (Supermicro X9DRI-LN4+); everything works except as described above. I have tried to reseat the CPUs, and I have tried to check the RAM. I know you want my diagnostics; however, with every reboot they will be lost, right? (I am not 100% sure that is the case.) But for good measure I have attached them anyway. At the time of the diagnostics download the system had been running a little over an hour. Any ideas? Do you know of any tools to test the system (CPU/chipset/RAM/PCIe bus)? (I know there is a memtest included in the unRAID boot menu.) Kind regards, Alphahelix xeon-diagnostics-20190527-0535.zip