takkkkkkk (Members, 241 posts)

Posts posted by takkkkkkk

  1. 20 hours ago, Squid said:

    There's a current issue with 6.6.5 where if a cron job is scheduled to happen on a Sunday, it will wind up happening every day. Switch it to be a Saturday instead.

     

    Far, far easier to post diagnostics for us to look at.


    Thanks!!! That seems to have fixed it; the parity check didn't run yesterday!

     

    Also, thanks for letting me know about diagnostics. I wasn't sure where to start, so I'll keep that in mind for next time.
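
    For anyone else hitting this, the workaround is just making sure the schedule lands on Saturday instead of Sunday. In raw cron terms (the time and command below are placeholders; only the day-of-week field matters):

    # 0 = Sunday in the day-of-week field; this is the schedule hit by the 6.6.5 bug
    0 0 * * 0  /path/to/scheduled-job
    # Workaround: 6 = Saturday
    0 0 * * 6  /path/to/scheduled-job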

     

  2. 4 hours ago, Squid said:

    If you're running a VM as your primary machine, then you should have unRaid set to boot into GUI mode (and have any crappy old monitor / keyboard / mouse attached to the server) so that you can do stuff like this

    That's exactly what I'm doing at home, but it's when I want to make changes remotely that it fails... I'm trying to use the VM at home as my primary device, yet I have to have another device on and running just so that I can make changes to the VM...

  3. I'm not sure if this is a VM issue or a Windows issue,

    but I'm trying to pass a PEXUSB3S44V through to Windows, and although the passthrough looks successful, I'm unable to use it in the Windows VM.

    I've updated the driver, and now I have 8 devices:

    - 4x Renesas USB 3.0 eXtensible Host Controller. "This device is working properly"

    - 4x USB Root Hub (USB 3.0). "This device cannot start. (Code 10) {Operation Failed} The requested operation was unsuccessful."

    I'm not really sure what to do at this point, since whether I uninstall the driver or start over, I always get stuck here.

    Any recommendations on what I may be able to do?

  4. Hi there,

    I have a PCIe USB card that has 4 dedicated controllers, but I can't find a way to pass the device through.

    My System Devices page shows:

    IOMMU group 21:[1912:0015] 06:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

    IOMMU group 22:[1912:0015] 07:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

    IOMMU group 23:[1912:0015] 08:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

    IOMMU group 24:[1912:0015] 09:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

     

    So I entered the below into the syslinux config on the USB flash drive:

    label Unraid OS
      menu default
      kernel /bzimage
      append vfio-pci.ids= 1912:0015 initrd=/bzroot

    But I still don't see anything under "Other PCI Devices".

     

    Any thoughts on what I can do?
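
    For reference, the usual form of that append line has no space after the equals sign, and since all four controllers share the same vendor:device ID, a single entry should cover them. A sketch only (whether the stray space is the actual culprit here is just a guess):

    label Unraid OS
      menu default
      kernel /bzimage
      append vfio-pci.ids=1912:0015 initrd=/bzroot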

     

  5. So, I have 2x Xeon E5-2667 v2, which have 8 cores each. I'm thinking about doing CPU pinning to get the max performance, but I'm a bit lost as to what I should be pinning...

    Here is my current usage:

    Docker:

    1. Plex

    2. Couchpotato

    3. Sonarr

    4. Deluge

    5. sabnzbdvpn

    6. Krusader

    7. OpenVPN

     

    VMs:

    1. Gaming VM

    2. Work VM (domain joined | Need As Many Cores as possible)

    3. Other various VMs x2 (do not need much of processing power)

     

    I'm assuming I need to pin the gaming VM to the CPU that the GPU's PCIe slot is connected to, and I guess I want to limit it to that CPU (let's say this is CPU1)... How about the others? Should I pin everything else to CPU2? Or is there a better mix?
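
    For anyone sketching this out, pinning ends up in two places: the VM XML (or the CPU pinning section of the VM template) for the VMs, and --cpuset-cpus as an extra parameter for the Docker containers. The core numbers below are made up for illustration and would need to match the real topology reported by lscpu:

    <!-- VM side: pin 4 guest vCPUs to host cores 2-5 on the CPU that owns the GPU's PCIe slot -->
    <vcpu placement='static'>4</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
      <vcpupin vcpu='2' cpuset='4'/>
      <vcpupin vcpu='3' cpuset='5'/>
    </cputune>

    For the Docker containers, the equivalent is an extra parameter such as --cpuset-cpus=8-15 (again, example core numbers) so they stay off the gaming VM's cores.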

  6. I've been thinking about upgrading my current Unraid rig, which has a Core i5 4590S with 32GB of RAM and a bunch of IBM SAS PCIe cards. This rig was fine 3 years ago, but now the processor load is pretty much maxed out all day, especially when I have multiple people streaming Plex, and it can barely handle 3 Windows VMs.

    Since I'm upgrading the Unraid server, I'm thinking about also getting rid of my gaming PC (4790K, Nvidia 980, 32GB RAM) and consolidating everything into the new Unraid rig.

    In terms of the new Unraid box, I'm considering dual E5-2667 v2 on a GIGABYTE GA-7PESH2 (going to follow the good old JDM_WAAAT community build) with 128GB of 1866 DDR3 and a 1080 Ti.

     

    Now, since LGA2011 is a 7-year-old socket and the 2667 is 5 years old and only supports DDR3, I can't reuse any of these parts in the future… Do you guys think going with dual E5-2667 v2 + a 1080 Ti is worth it??

     

    Do you guys think I can do better? I need SAS, and I would like to have IPMI and 10GbE.

  7. I have an Unraid server with ~20 3TB HDDs, a low-wattage i5, and 36GB of memory, and then a gaming rig with 36GB of RAM, a GTX 780, and a 4790K.

    With the new GTX cards out, I'm thinking about rebuilding my gaming rig, but then I thought maybe I should get an i9 / Threadripper or dual E5-2667 and combine the Unraid server and the gaming PC. But that was really my enthusiast side talking; in reality, is there any point in doing this? Considering I already have a dedicated Unraid server that is working fine, I don't know that I necessarily have to change it... I have a couple of users on Unraid, so I don't really want their transcoding killing my gaming performance either. Also, I'm worried about power consumption. My gaming rig is turned off whenever I'm not using it; if I combine it with the Unraid server, it has to be on 24/7, and I don't need my wife blaming me for high hydro bills.

     

    Thoughts? Maybe I'm not being creative enough to see the advantages of replacing the dedicated Unraid server and sharing it with the gaming rig.

  8. 11 hours ago, binhex said:

     

    From the FAQ:-

    •  If you are using PIA as your VPN provider then this will be done for you automatically, as long as you are connected to an endpoint that supports port forwarding (see list below) AND STRICT_PORT_FORWARD is set to "yes". If you are using another VPN provider then you will need to find out if your VPN provider supports port forwarding and what mechanism they use to allocate the port, and finally configure the application to use the port.

    PIA endpoints that support port forwarding (incoming port):-

    
    ca-toronto.privateinternetaccess.com (CA Toronto)
    ca.privateinternetaccess.com (CA Montreal)
    nl.privateinternetaccess.com (Netherlands)
    swiss.privateinternetaccess.com (Switzerland)
    sweden.privateinternetaccess.com (Sweden)
    france.privateinternetaccess.com (France)
    ro.privateinternetaccess.com (Romania)
    israel.privateinternetaccess.com (Israel)

    Thanks! How can I set STRICT_PORT_FORWARD to yes? I don't see that option.
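
    For later readers: STRICT_PORT_FORWARD is just an environment variable on the container. If it doesn't show up in the Docker template, it can usually be added by editing the container and adding a new Variable with key STRICT_PORT_FORWARD and value yes. The command-line equivalent is simply an extra -e flag; the container name, image name, and remaining settings below are placeholders, not the real template:

    docker run -d --name=vpn-container \
      -e STRICT_PORT_FORWARD=yes \
      [...rest of your existing template settings...] \
      binhex/<your-vpn-image>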

  9. I had a drive fail. I stopped the array, and in order to find which drive had failed I had to pull the drives out. I replaced the failed drive and restarted the array; while doing so, two of my drives became unmountable. I ran the file system status check, but I cannot decipher what the issues are.

    Can you guys help me figure out what I need to do? When I restart the array, those drives are still unmountable...

     

    Phase 1 - find and verify superblock...
            - block cache size set to 3047792 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 1653481 tail block 1653477
            - scan filesystem freespace and inode maps...
    sb_fdblocks 221746390, counted 222241018
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 2
            - agno = 1
            - agno = 3
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.
    
            XFS_REPAIR Summary    Tue Nov  7 20:22:57 2017
    
    Phase		Start		End		Duration
    Phase 1:	11/07 20:22:40	11/07 20:22:40
    Phase 2:	11/07 20:22:40	11/07 20:22:42	2 seconds
    Phase 3:	11/07 20:22:42	11/07 20:22:56	14 seconds
    Phase 4:	11/07 20:22:56	11/07 20:22:56
    Phase 5:	Skipped
    Phase 6:	11/07 20:22:56	11/07 20:22:57	1 second
    Phase 7:	11/07 20:22:57	11/07 20:22:57
    
    Total run time: 17 seconds

     

     

    Phase 1 - find and verify superblock...
            - block cache size set to 3047792 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 1368830 tail block 1368826
            - scan filesystem freespace and inode maps...
    sb_fdblocks 244072797, counted 244567425
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 0
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.
    
            XFS_REPAIR Summary    Tue Nov  7 20:27:19 2017
    
    Phase		Start		End		Duration
    Phase 1:	11/07 20:27:18	11/07 20:27:18
    Phase 2:	11/07 20:27:18	11/07 20:27:18
    Phase 3:	11/07 20:27:18	11/07 20:27:19	1 second
    Phase 4:	11/07 20:27:19	11/07 20:27:19
    Phase 5:	Skipped
    Phase 6:	11/07 20:27:19	11/07 20:27:19
    Phase 7:	11/07 20:27:19	11/07 20:27:19
    
    Total run time: 1 second
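
    For context, both outputs above are from a check-only pass (the "No modify flag set" lines mean -n was in effect, so nothing was written). The usual next step, assuming XFS and the array started in Maintenance mode, is to run the repair without -n against the md device. A sketch, with the disk number as a placeholder:

    # Array in Maintenance mode; replace X with the disk number (e.g. md1 for disk 1)
    xfs_repair -v /dev/mdX

    # If it refuses because of a dirty log and mounting the disk once isn't possible,
    # -L zeroes the log at the cost of possibly losing the most recent transactions:
    # xfs_repair -vL /dev/mdX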

     

     

  10. 39 minutes ago, tdallen said:

    Based on your workload an E5 would have seemed appropriate, and (simplistically) the i9 looks like it will be a faster, consumer oriented chip similar to the current 10 core E5's.  I'm not sure where Intel is going with their branding, E7, Core i9, etc.  That i9 is going to be a very expensive chip, though - an E5 would be a better value.

    I thought i9s would be more reasonably priced since they're supposed to be mainstream processors.

    I wasn't really keen on getting X99 since it's somewhat outdated, and I'm not interested in upgrading again anytime soon after I build the new rig... Other than the price, are there any advantages to going E5 vs i9?

  11. I started my Unraid system with a low-energy Core i5, thinking power consumption was the most important factor for a rig that would be on 24/7.

    A few years later, it turns out my gaming rig (i7 4790K / Nvidia 1070) AND the Unraid rig are both on 24/7, and for a particular reason. My gaming rig, which I use for occasional gaming/VR/etc., also records sports games and foreign shows overnight (this uses ~35% of its processing power), and I then edit the recordings in Premiere. The edited shows are passed on to one of the VMs on Unraid for encoding. I use the gaming rig for recording because the recording drops frames as soon as the CPU is maxed out.

     

    Obviously, I did not anticipate all the encoding work when I built the Unraid rig, but now that I've realized how much horsepower I potentially need, I've started thinking about getting a rig with a decent number of cores/threads and merging the gaming PC and Unraid altogether. I was originally thinking about building a new rig with a Xeon-D, but I couldn't find a board with enough SAS ports plus PCIe x16 slots. Then I heard the recent Core i9 news, so now I'm thinking of going with a Core i9 and finding a decent X299 board with IPMI.

     

    Here is my workload for both gaming and Unraid:

    GAMING RIG:

    - Recording shows using Elgato HD 60 S in 1080

    - Video editing using Premiere

    - Occasional video games

    - Occasional Oculus VR

     

    Unraid:

    - Regular VFS/smb stuff

    - Plex powering 2-3 users

    - 3 VMs (all Win 10); one of them handles the encoding

    - Sonarr/CouchPotatoes

    - Torrent/Usenet

    - Crashplan

     

    What are your thoughts? Do you think a Core i9 is overkill? I'm also concerned about power consumption, which is the whole reason I went with the low-energy i5 to begin with.

     

  12. I had installed Windows 10 using SeaBIOS because I was having issues with resolutions.

    But based on the pinned topic, I'm learning that OVMF would be a much better choice for CPU pinning.

    Is there any way to switch the BIOS? I would rather not have to reinstall Windows / Adobe Encoder if I don't have to...
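
    For reference only (a sketch, not a guarantee that an in-place switch will actually boot, since a SeaBIOS install is typically MBR while OVMF expects an EFI/GPT disk): the firmware type lives in the <os> block of the VM's XML. A SeaBIOS template has no loader/nvram lines, while an OVMF template looks roughly like the following, with paths shown as they typically appear on Unraid and the nvram file name standing in for the VM's UUID:

    <os>
      <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
      <!-- these two lines are what make the VM OVMF; SeaBIOS templates omit them -->
      <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
      <nvram>/etc/libvirt/qemu/nvram/xxxxxxxx_VARS-pure-efi.fd</nvram>
    </os>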

  13. I updated my Unraid server to 6.3.4 this morning and my user share stopped working...

    Every time I try to add a new file to my user share, Windows says "You need permission to perform this action". Any reason why I'm getting this error?

     

    Edit: I realized it's actually only happening to one folder within the user share. Other folders within the same user share are working fine.
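
    If it turns out to be a plain ownership/permissions problem on that one folder, the usual fix on Unraid is the New Permissions tool under Tools, or resetting the folder from the command line. A rough sketch (share and folder names are placeholders, and the modes here approximate what the tool sets rather than reproduce it exactly):

    # Reset ownership and permissions on the one problem folder only
    chown -R nobody:users /mnt/user/MyShare/ProblemFolder
    chmod -R u+rwX,g+rwX,o+rX /mnt/user/MyShare/ProblemFolder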
