Everything posted by segator

  1. Have a look at the repo; there are instructions and an explanation. If you have questions after reading that, let me know.
  2. Guys, have a look at my app, which I published a year ago. It automatically links your nodes, and if you have a dynamic public IP it detects changes too. It now works fine on unraid.
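The dynamic-IP handling described above comes down to comparing the last known public IP with a freshly resolved one on each poll. A minimal sketch of that idea (the function name and IPs are hypothetical illustrations, not the app's actual code):

```python
def detect_ip_change(last_ip, current_ip):
    """Return the new public IP when it differs from the last known one,
    or None when nothing changed (or the lookup failed)."""
    if current_ip and current_ip != last_ip:
        return current_ip
    return None

# On each poll the node would re-announce its endpoint only when this
# returns a new address, leaving the mesh untouched otherwise.
changed = detect_ip_change("203.0.113.5", "203.0.113.9")
```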
  3. Hello guys, can someone give me instructions for building the kernel and packaging it for unraid 6.8? I need to add some modules to the kernel. Thanks!
  4. Wireguard++ I created a simplified app for building WireGuard mesh networks with Docker. Maybe it can be used as an application for unraid, but first we need the kernel module! I use WireGuard as the network overlay for my multi-site Kubernetes cluster, and now I would like to be able to add bare-metal unraid as a worker.
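For context, a mesh node in a setup like this reduces to one WireGuard interface with a [Peer] section per other node; a minimal sketch of what such a generated config looks like (all keys, IPs, and the endpoint are placeholders, not output from the app):

```ini
# /etc/wireguard/wg0.conf -- one mesh node (placeholder values)
[Interface]
Address = 10.99.0.1/24               ; this node's overlay IP
PrivateKey = <this-node-private-key>
ListenPort = 51820

[Peer]                               ; repeated once per other mesh node
PublicKey = <peer-public-key>
Endpoint = peer1.example.org:51820   ; rewritten when a peer's dynamic IP changes
AllowedIPs = 10.99.0.2/32
PersistentKeepalive = 25
```

Detecting a peer's public-IP change and rewriting its `Endpoint` is exactly the part the app automates.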
  5. It works awesome on my ASRock EP2C602, but it seems the fan control doesn't see one of the fans, CPU_FAN1_2. It's weird, because on the sensors page I can see it and its values.
  6. Hi everyone, sorry for leaving this topic abandoned. A year ago I decided to stop using unraid (no snapshot support; I switched to GlusterFS) and I forgot to answer replies. Well, I see there is quite a lot of interest in this Docker image. I don't know the state of Xpenology right now; does it support the latest Synology versions? I'm off to Japan for two weeks now; when I return I will try to find time to update and simplify the Docker image.
  7. Hi guys, I created a first version of Xpenology in Docker (running inside a VM). Installing Xpenology is a mess, so I simplified the process: you only need to execute one docker command and you have your Xpenology instance working on any host machine (with KVM support). I created this Docker image because I wanted the power and beauty of the Synology UI, but with the hard drive management of unraid. So you can access your files over NFS/GlusterFS from the Xpenology container on your unraid host and get all the awesome features of DSM. I didn't push the final image to Docker Hub because I don't know whether it would be illegal to push an image with Xpenology in it, so you need to build it first, but it's very simple and easy.
  8. Hi guys, I finally created a first version of Xpenology in Docker. Check it out; I have DSM and Hyper Backup working with my unraid server.
  9. Guys, I found out something you will like: DSM 6.1.3 + virtio Ethernet + Hyper Backup working (yes, the data is on the unraid host, mounted with GlusterFS). In the Proxmox thread on the Xpenology forum, someone compiled a version of Jun's loader with the virtio driver; it works perfectly. And while playing with PetaSpace I found out that it internally uses GlusterFS and also works with Hyper Backup, so I tried creating a GlusterFS Docker container on the unraid server and mounting it in the Xpenology VM, and Hyper Backup works. I'm thinking of creating a Docker image with the DSM ready-to-use image for all of you. I think a lot of people could be interested.
  10. I am looking for a solution where I can have the benefits of unraid plus snapshots:
      - JBOD
      - Parity drive
      - Only spin up the disks where the files being used are located
      - See all drives as a single volume
      - If one or more disks fail, only lose the data on those disks
      - SSD write cache disk
      - An SSD read cache disk would also be interesting (avoid spinning up drives for frequently read data)
      - Snapshots
      For the moment I have only been able to get all these features using GlusterFS + LVM, but I would like to have the simplicity of the unraid OS (pendrive-based OS), its UI, and its notifications for disk failures and SMART.
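In the GlusterFS + LVM combination above, the snapshots come from LVM rather than the filesystem; the sketch below shows the standard copy-on-write snapshot command that setup relies on (volume group and LV names are hypothetical):

```shell
# Take a 10G copy-on-write snapshot of the logical volume backing a Gluster brick
lvcreate --snapshot --name media_snap --size 10G vg0/media
# Roll back later by merging the snapshot into the origin volume
lvconvert --merge vg0/media_snap
```

The snapshot only consumes space as the origin volume diverges from it, which is what makes frequent snapshots cheap in this layout.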
  11. OS: Unraid 6.2 Pro
      CPU: Dual Xeon E5-2670, SR0KX stepping
      Motherboard: ASRock EP2C602
      RAM: 32GB ECC buffered 1333 MHz quad rank
      Case: Nanoxia Deep Silence 5
      Drive Capacity: 16x 3.5" (5 hot-plug, 11 internal). The motherboard only has 14 SATA ports, so I will need an HBA in the future. I have one 5.25" bay empty, so I can add a 6x 2.5" enclosure if I want.
      Power Supply: EVGA SuperNova 850 G2
      Fans: 1 rear, 2 CPU fans, 3 HDD fans
      Parity Drive: 8TB Seagate Archive
      Data Drives: 8TB Seagate Archive, 2x 2TB Seagate, 4TB WD Red, 6TB WD Red, 2x 3TB WD Green
      Cache Drive: 1TB Seagate (normal drive)
      Array Capacity: 28TB
      Services: Plex, Deluge, TVHeadend, Jenkins (automated video encoding to microHD after download), Rancher (Docker host clustering UI), (test purposes) Windows 2012 Server, CentOS, RancherOS, Windows 7 (rendering machine), CrashPlan, Duplicati
      Used as a media center, cloud storage server, and lab server. I usually work on simulating clusters to test different cases, so I need to create multiple VMs, and my GF needs a lot of power for 3D rendering. I know the case isn't the best, but I can't justify spending $300 on a case, and I had trouble finding cheap cases in the EEB form factor. With the 5-in-3 enclosure this case can hold 16 HDDs, which is enough for me. Very, very silent; it's hard to tell by ear that the server is turned on. Very good performance: in the first days I tried running 2 gaming VMs and it worked perfectly. No issues with power consumption; I'm at roughly 100W idle and 370W at full load.
  12. Hi, sorry, the PSU is 850W. About that: maybe I'm wrong, but I thought your power draw was limited by the UPS output, isn't it? My sig is accurate; it's exactly the machine where I'm having this trouble. I thought the problem was the power supply, but I changed the SATA cables of the 3 drives that were getting disconnections, moved them to another SATA controller, ran a parity check, and got no errors and no disconnections. It's hard to believe all 3 of those cables had problems, and I have more HDDs plugged into that "problematic" SATA controller that are working fine. So I'm not sure what the problem is.
  13. As far as I know it's not possible; you need to pass through the whole USB controller or individual devices. But I remember someone in another forum talking about a USB hub with an integrated USB controller (that might work).
  14. I just got DSM 6.0.2 working on unraid; check the Xpenology forum. It's mandatory to set the NIC to e1000 and the HDD to SATA. I'm pushing those guys to add the virtio drivers. It works pretty well, but my main objective in installing Xpenology was to use the new DSM 6 feature "Hyper Backup", and DSM doesn't like creating backups from NFS sources (yes, I have my unraid shared folders mounted over NFS). Maybe there is a way to trick the OS into thinking it's a hard drive directory rather than a mounted filesystem. Incremental/deduplicated + encrypted + versioned backups to different cloud providers; the major feature is that you can see a timeline in the file explorer with all the versions of a single file/folder, so this feature is very interesting to me. Does anyone know a web app or something that does exactly the same? Example -->
  15. Wow, when are they going to deliver? I'm thinking of buying 2 (I'm from Spain, and they ship to Spain).
  16. Hi, two weeks ago I upgraded my server to a more powerful one with dual Xeon E5-2670s. It was working pretty well, but yesterday the parity check kicked in (once a month; the first run on this server), and surprise: I got multiple read/write errors on 3 drives. The power supply is 700W but I have a UPS rated at 480W; could that be the problem? Because it doesn't make sense for 3 drives to have errors at once. I restarted the server and now I'm getting a drive disabled. Maybe the SATA cables, maybe the controller? I have the MB; what should I do? I don't think there is any problem with the HDDs; checking the S.M.A.R.T. data, all is fine. EDIT: I checked the HDDs and all are fine. I suspect something related to the PSU, because the 3 HDDs are connected to the same PSU power cable (the typical cable with 3 SATA connectors). For now I have replaced the SATA cables and connected these 3 HDDs to a different controller (a SAS controller).
  17. It depends on what exactly you want; you should configure one thing or the other. The problem is that Xpenology doesn't have the KVM virtio drivers, which is why you need to set the disk as a SATA/SCSI or IDE device (SATA is best, because it performs better than SCSI or IDE). If you want to create a virtual disk (a single file inside your unraid system; it must be on the cache or outside the array!) you can set something like this:
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source file='/mnt/disks/SSD/VM/Xpenology/vdisk1.img'/>
        <target dev='sda' bus='sata'/>
        <boot order='2'/>
        <address type='drive' controller='1' bus='0' target='0' unit='0'/>
      </disk>
      You can also pass through a disk or an LVM partition to the VM. What did I do? I created a virtual disk (file image), and inside Xpenology I use NFS to access the unraid shares. And yes, you can use Download Station and File Station with your unraid shares, but the performance is not the best. In any case, you need to create a vdisk so Xpenology has space to write the OS files and applications.
  18. hda is used by the ISO; why do you want to change the vdisk to hda?
  19. Hi, this is my XML. I have 2 vdisks: one running on my SSD ("unassigned") and the other running on my 1TB cache disk.
      <domain type='kvm'>
        <name>Xpenology</name>
        <uuid>430099b3-50e1-c662-4346-fd88160550a7</uuid>
        <metadata>
          <vmtemplate name="Custom" icon="linux.png" os="linux"/>
        </metadata>
        <memory unit='KiB'>1048576</memory>
        <currentMemory unit='KiB'>1048576</currentMemory>
        <memoryBacking>
          <nosharepages/>
          <locked/>
        </memoryBacking>
        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='0'/>
          <vcpupin vcpu='1' cpuset='1'/>
          <vcpupin vcpu='2' cpuset='2'/>
          <vcpupin vcpu='3' cpuset='3'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough'>
          <topology sockets='1' cores='4' threads='1'/>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/bin/qemu-system-x86_64</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/disks/SSD/VM/Xpenology/vdisk1.img'/>
            <target dev='sda' bus='sata'/>
            <boot order='2'/>
            <address type='drive' controller='1' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/software/KVM/XPEnoboot_DS3615xs_5.2-5644.5.iso'/>
            <target dev='hda' bus='sata'/>
            <readonly/>
            <boot order='1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/cache/VM-Cache/Xpenology/vdisk2.img'/>
            <target dev='hdb' bus='sata'/>
            <boot order='4'/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='sata' index='1'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
          </controller>
          <controller type='sata' index='2'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
          </controller>
          <controller type='sata' index='3'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
          </controller>
          <controller type='sata' index='4'>
            <address type='pci' domain='0x0000' bus='0x07' slot='0x05' function='0x0'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='dmi-to-pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
          </controller>
          <controller type='pci' index='2' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
          </controller>
          <controller type='pci' index='3' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x0'/>
          </controller>
          <controller type='pci' index='4' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x03' function='0x0'/>
          </controller>
          <controller type='pci' index='5' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x04' function='0x0'/>
          </controller>
          <controller type='pci' index='6' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x05' function='0x0'/>
          </controller>
          <controller type='pci' index='7' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x06' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:e6:78:64'/>
            <source bridge='br0'/>
            <model type='e1000'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
          </interface>
          <interface type='bridge'>
            <mac address='52:54:00:e6:78:63'/>
            <source bridge='virbr0'/>
            <model type='e1000'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
          </interface>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='' keymap='en-us'>
            <listen type='address' address=''/>
          </graphics>
          <video>
            <model type='cirrus' vram='16384' heads='1'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
          </video>
          <memballoon model='virtio'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
          </memballoon>
        </devices>
      </domain>
  20. Hi guys, I suppose that, like me, you want a very low-powered NAS with lots of TB of media, running a Plex server. But the main problem is low power = slow transcoding. I saw a topic on the Plex forum about remote transcoding; the guy who described it had a Python script to run transcoding on remote machines. So awesome. The main problem is that you had to set it up manually; I wanted it to happen automatically using cloud computing. That's why I wanted to do a proof of concept, and I created a Plex Docker image with this automatic feature. If you want to try it and give me some feedback, or help me with the development, go ahead: here are the sources, the Docker image, and the docker-compose file. The images are on Docker Hub, ready to use. I couldn't get it working on unraid; I needed to create a KVM VM with Ubuntu. The problem is the NFS server container: it seems it doesn't work on unraid, and I don't know why; I'm not an expert, so if you can help me I'd appreciate it. Here is the link with everything. (This is my first project on GitHub; tell me if I've done something wrong.) Thanks, guys.
  21. I have Xpenology running pretty well in KVM on unraid. The key to getting KVM working is to change the virtio hard drive to the SCSI or SATA virtual model, and the virtio network to e1000. I don't know exactly why, but on a Proxmox server it works with virtio network and SCSI HDD, while on unraid 6.1 it needs to be SCSI/SATA and e1000. About using your unraid shares in Xpenology: for now the only way I've found is NFS, which works pretty well. Maybe 9p KVM sharing would be better, but I couldn't get it working in Xpenology; I suppose the kernel needs to support it, and it seems that's not the case with Xpenology. About your installation problem where the system tells you there is no HDD: that's because you are using the virtio driver. Change it to SCSI/SATA and it works (you need to modify the KVM XML file).
  22. Hi guys, I'm considering merging my personal gaming machine with my NAS. But I want to know whether it's possible to shut down the GPU when the VM it is passed through to is powered off. I don't want the GPU running 24/7 if I only use my personal computer 2 or 3 hours a day. I have a GeForce GTX 780 Ti; the motherboard and CPU are to be decided (I'm open to suggestions). I've been thinking of an i7 6700K, 32GB of RAM, and 2x IBM M1015 to connect my 12 HDDs.
  23. Hi everyone. I have an Xpenology VM with an NFS mount from the unraid host. It works fine, but the performance is poor: I only get 30-60 MB/s up and down. I wanted the unraid hard drive system plus the Xpenology web UI. For online use it works pretty well, but if I plan to use SMB/NFS from Xpenology on the LAN, the performance is poor. I have the network set to e1000 because Xpenology doesn't recognize the virtio driver, and the HDD on the SCSI driver because virtio isn't recognized there either. Do any of you have the same setup? Do you know how to improve the speed? Thank you, guys.
  24. Hi guys, I was trying to make the cache_dirs plugin work, but when enabled it was constantly scanning my disks with the find command, so I disabled it. I played with cache pressure 0 on another (non-unraid) server and it works perfectly: the first ls -l -R takes its time, but the second and third runs are almost instant. But when I set the same values (cache pressure 0 and swappiness) on unraid, it doesn't work; it always spins up the disks and accesses them to get the directory information with ls -l -R. I want to prevent applications that scan directories from spinning up the drives. I have 4GB of RAM, and when the first ls -l -R finishes I still have 2GB free. Ideas? Thanks.
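For anyone wanting to reproduce the tuning described above, the two knobs mentioned are standard Linux virtual-memory sysctls. A sketch of the settings (the cache pressure value is from the post; the swappiness value was not stated, so the one below is only illustrative):

```ini
# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
vm.vfs_cache_pressure = 0   ; never reclaim cached dentries/inodes (value from the post)
vm.swappiness = 1           ; the post tunes swappiness too; exact value not given
```

`vfs_cache_pressure = 0` tells the kernel to keep directory and inode metadata in RAM indefinitely, which is what makes a repeated `ls -l -R` return without touching the disks.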
  25. Well, I checked while cache_dirs is running, and I have 3.5GB free according to the free -m command.
      Quoting the reply: "If you're using Windows Explorer to check, be aware that it will inspect the files, which is not part of what is cached, just the directory entries. You need to use a file client that won't inspect the files themselves. The best way to check whether a disk directory is part of the cache is to use a Linux command shell and type 'ls /mnt/user/share/dir/subdir' or 'ls /mnt/disk#/share/dir/subdir'. If the disk is spun down and cached by cache_dirs, it should not spin up with that command."
      You're right, I have an SMB directory mapped on my Windows computer, and it seems that's the problem: when I start my computer, the NAS drives automatically spin up. Using NFS it works fine. Is it possible to use my cache drive as a read cache? Something like NFS cachefsd, which works very well; on one server with Plex installed I have 1TB of my most-viewed media read-cached.
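On the read-cache question: the modern Linux descendant of the cachefs idea is FS-Cache, where the cachefilesd daemon backs a cache on local disk and NFS mounts opt in with the fsc option. A sketch under the assumption that cachefilesd can be installed on the client (the hostname, share, and cache path are placeholders; I have not verified availability on unraid):

```shell
# /etc/cachefilesd.conf should point the cache at a directory on the SSD, e.g.:
#   dir /mnt/cache/fscache
cachefilesd                                            # start the cache daemon
mount -t nfs -o fsc tower:/mnt/user/media /mnt/media   # 'fsc' enables FS-Cache
```

Repeat reads of the same files are then served from the local cache directory instead of the NFS server, which is the behavior described for the Plex media server above.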