segator

Members · Content Count: 104

  1. Hi everyone, sorry for leaving this topic abandoned. A year ago I decided to stop using unRAID (no snapshot support; I switched to GlusterFS) and I forgot to answer the replies. Well, I see quite a lot of interest in this Docker image. I don't know the current state of XPEnology; does it support the latest Synology versions? I'm off to Japan for two weeks now; when I return I will try to find time to update and simplify the Docker image.
  2. Hi guys, I created a first version of XPEnology in a Docker container (running inside a VM). Installing XPEnology is a mess, so I simplified the process: you only need to execute one docker command and you have your XPEnology instance working on any host machine (with KVM support). https://github.com/segator/xpenology-docker I created this Docker image because I wanted the power and beauty of the Synology UI but the hard-drive management of unRAID. So you can access your files over NFS/GlusterFS from the XPEnology Docker to your unRAID host and get all the awesome features of DSM. I didn't push the final image to Docker Hub because I don't know whether it would be illegal to publish an image with XPEnology in it, so you need to build it first, but it's very simple and easy.
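To give an idea, the build-then-run workflow described above might look roughly like the sketch below. The image tag, port, and run flags are illustrative assumptions on my part, not taken from the repository; check the repo README for the real command. The script only composes and prints the commands rather than executing them:

```shell
# Sketch of the build-then-run workflow (assumptions, not the repo's exact flags).
set -eu

IMAGE="xpenology-docker:local"   # hypothetical local tag (image is not published on Docker Hub)

# Build the image locally, since it cannot be pushed to Docker Hub:
BUILD_CMD="docker build -t $IMAGE ."

# Run it; KVM support requires exposing /dev/kvm to the container,
# and 5000 is DSM's usual web UI port:
RUN_CMD="docker run -d --device /dev/kvm -p 5000:5000 $IMAGE"

printf '%s\n' "$BUILD_CMD" "$RUN_CMD"
```

Running the printed commands on a KVM-capable host is what the "one docker command" setup boils down to.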
  3. Hi guys, I finally created a first version of XPEnology in Docker: https://github.com/segator/xpenology-docker Check it out; I have DSM & Hyper Backup working with my unRAID server.
  4. Guys, I found out something you will like: DSM 6.1.3 + virtio Ethernet + Hyper Backup working (yes, the data lives on the unRAID host, mounted with GlusterFS). In the Proxmox thread on the XPEnology forum, someone compiled a version of Jun's loader with the virtio driver, and it works perfectly. And playing with PetaSpace I found out that it internally uses GlusterFS and also works with Hyper Backup, so I tried running a GlusterFS Docker container on the unRAID server and mounting it in the XPEnology VM, and Hyper Backup works. I'm thinking of creating a Docker image with the DSM image ready to use for all of you. I think a lot of people could be interested.
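For anyone curious, the GlusterFS wiring described above amounts to exporting a directory from the unRAID host as a Gluster volume and mounting it inside the XPEnology VM. The hostname, volume name, and paths below are hypothetical placeholders; the script composes the commands without executing them:

```shell
# Sketch of the GlusterFS setup: host name, volume name and brick path are
# placeholder assumptions, not values from my actual setup.
set -eu

HOST="unraid.local"        # assumed hostname of the unRAID box
VOLUME="media"             # assumed volume name
BRICK="/mnt/user/media"    # assumed brick path on the unRAID side

# Commands that would run on the unRAID host (composed here, not executed):
CREATE_CMD="gluster volume create $VOLUME $HOST:$BRICK force"
START_CMD="gluster volume start $VOLUME"

# Command that would run inside the XPEnology VM:
MOUNT_CMD="mount -t glusterfs $HOST:/$VOLUME /mnt/backup-target"

printf '%s\n' "$CREATE_CMD" "$START_CMD" "$MOUNT_CMD"
```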
  5. I am looking for a solution where I can have the benefits of unRAID plus snapshots:
     - JBOD
     - Parity drive
     - Only spin up the disks where the files in use are located
     - See all drives as a single volume
     - If one or more disks fail, only lose the data on those disks
     - SSD write cache
     - Ideally also an SSD read cache (avoid spinning up drives for frequently read data)
     - Snapshots
     For the moment I have only been able to get all these features using GlusterFS + LVM, but I would like to have the simplicity of the unRAID OS (pendrive-based OS), its UI, and its notifications for disk failures and SMART.
  6. OS: Unraid 6.2 Pro
     CPU: Dual Xeon E5-2670, SR0KX stepping
     Motherboard: ASRock EP2C602
     RAM: 32GB ECC buffered 1333 MHz quad rank
     Case: Nanoxia Deep Silence 5
     Drive capacity: 16x 3.5" (5 hot-plug, 11 internal). The motherboard only has 14 SATA ports, so I will need an HBA in the future; I also have a 5.25" bay free, so I can add a 6x 2.5" enclosure if I want.
     Power supply: EVGA SuperNova 850 G2
     Fans: 1 rear, 2 CPU, 3 HDD
     Parity drive: 8TB Seagate Archive
     Data drives: 8TB Seagate Archive, 2x 2TB Seagate, 4TB WD Red, 6TB WD Red, 2x 3TB WD Green
     Cache drive: 1TB Seagate (normal drive)
     Array capacity: 28TB
     Services: Plex, Deluge, TVHeadend, Jenkins (automated video encoding to microHD after download), Rancher (Docker host clustering UI), and for testing: Windows Server 2012, CentOS, RancherOS, Windows 7 (rendering machine), CrashPlan, Duplicati
     Used as a media center, cloud storage server, and lab server. I usually simulate clusters to test different scenarios, so I need to create multiple VMs, and my girlfriend needs a lot of power for 3D rendering. I know the case isn't the best, but I can't justify spending $300 on a case, and I had trouble finding cheap cases in the EEB form factor. With the 5-in-3 enclosure this case can hold 16 HDDs, which is enough for me. Very, very silent; it's hard to tell by ear that the server is even on. Very good performance: in the first days I ran 2 gaming VMs and it worked perfectly, with no errors. Power consumption is roughly 100W at idle and 370W at full load.
  7. Hi, sorry, the PSU is 850W. Maybe I'm wrong, but I thought your power draw is limited by the UPS output, isn't it? My sig is accurate; it's exactly the machine where I'm having this trouble. I thought the problem was the power supply, but I replaced the SATA cables of the 3 drives that were disconnecting, moved them to another SATA controller, ran a parity check, and got no errors and no disconnections. It's hard to believe all 3 of those cables were bad, and I have more HDDs plugged into this "problematic" SATA controller that are working fine, so I'm not sure what the problem is.
  8. As far as I know it's not possible; you need to pass through the whole USB controller or individual devices. But I remember someone on another forum talking about USB hubs with an integrated USB controller; that might work.
  9. I just got DSM 6.0.2 working on unRAID; check the XPEnology forum. You must set the NIC to e1000 and the HDDs to SATA. I'm pushing these guys to add the virtio drivers, which work pretty well. My main reason for installing XPEnology was the new DSM 6 feature "Hyper Backup", but DSM refuses to create backups from NFS sources (yes, I have my unRAID shared folders mounted over NFS). Maybe there is a way to trick the OS into thinking it's a local hard-drive directory rather than a mounted filesystem. Incremental/deduplicated + encrypted + versioned backups to different cloud providers, and the major feature is that you can see a timeline in the file explorer with every version of a single file or folder, which is very interesting to me. Does anyone know a web app or similar that does exactly the same? Example: https://global.download.synology.com/download/Package/img/HyperBackup/1.2.1-0235/screenshot_04.png
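One trick sometimes used to disguise a network mount as a local directory is a bind mount: mount the NFS share somewhere out of the way, then bind-mount it over a path the application treats as local storage. Whether DSM checks deeply enough to be fooled is untested, and all paths below are hypothetical placeholders; the script only composes the commands:

```shell
# Untested sketch of the bind-mount trick. All paths are placeholder
# assumptions; this composes the commands without running them.
set -eu

NFS_SRC="unraid.local:/mnt/user/backups"   # assumed unRAID NFS export
STAGING="/mnt/nfs-backups"                 # where the NFS share is really mounted
TARGET="/volume1/backups"                  # path DSM should see as "local"

printf '%s\n' \
  "mount -t nfs $NFS_SRC $STAGING" \
  "mount --bind $STAGING $TARGET"
```

A caveat: a bind mount does not change the underlying filesystem type, so anything that inspects /proc/mounts or calls statfs on the target will still see NFS. This may well not be enough to fool DSM.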
  10. Wow, when are they going to deliver? I'm thinking of buying 2 (I'm from Spain and amazon.de ships to Spain).
  11. Hi, two weeks ago I moved to a more powerful server with dual Xeon E5-2670s. It was working fine, but yesterday the parity check kicked in (it runs monthly; this was the first on this server) and, surprise, I got multiple read/write errors on 3 drives. The power supply is 700W but my UPS is rated 480W; could that be the problem? It doesn't make sense for 3 drives to fail at once. I restarted the server and now one drive shows as disabled. Maybe the SATA cables, maybe the controller? My motherboard is http://www.asrockrack.com/general/productdetail.asp?Model=EP2C602 What should I do? I don't think the HDDs themselves have a problem; the S.M.A.R.T. data all looks fine. EDIT: I checked the HDDs and they are all fine. I suspect something related to the PSU, because the 3 HDDs are connected to the same PSU power cable (the typical cable with 3 SATA connectors). For now I have replaced the SATA cables and connected these 3 HDDs to a different controller (a SAS controller).
  12. It depends on what exactly you want. The problem is that XPEnology doesn't have the KVM virtio drivers, so you need to present the disk as a SATA/SCSI/IDE device (SATA is best, since it performs better than SCSI or IDE). If you want to create a virtual disk (a single file inside your unRAID system; it must be on the cache or outside the array!) you can set something like this:

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source file='/mnt/disks/SSD/VM/Xpenology/vdisk1.img'/>
        <target dev='sda' bus='sata'/>
        <boot order='2'/>
        <address type='drive' controller='1' bus='0' target='0' unit='0'/>
      </disk>

      You can also pass through a whole disk or an LVM partition to the VM. What did I do? I created a virtual disk (file image) and, inside XPEnology, used NFS to access the unRAID shares. And yes, you can use Download Station and File Station with your unRAID shares, but the performance is not the best. In any case, you need to create a vdisk so that XPEnology has space to write the OS files and applications.
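The raw image file referenced by that <disk> snippet has to exist before the VM boots; qemu-img can create it. The 20G size below is an arbitrary example, and the script composes the command rather than running it:

```shell
# Sketch: create the raw image file used by the <disk> element above.
# The 20G size is an arbitrary example; size it to your needs.
set -eu

VDISK="/mnt/disks/SSD/VM/Xpenology/vdisk1.img"  # path from the XML snippet
SIZE="20G"

# Raw format here matches type='raw' in the <driver> element:
CREATE_CMD="qemu-img create -f raw $VDISK $SIZE"
echo "$CREATE_CMD"
```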
  13. hda is used by the ISO; why do you want to change the vdisk to hda?
  14. Hi, this is my XML. I have 2 vdisks, one on my SSD (unassigned device) and the other on my 1TB cache disk:

      <domain type='kvm'>
        <name>Xpenology</name>
        <uuid>430099b3-50e1-c662-4346-fd88160550a7</uuid>
        <metadata>
          <vmtemplate name="Custom" icon="linux.png" os="linux"/>
        </metadata>
        <memory unit='KiB'>1048576</memory>
        <currentMemory unit='KiB'>1048576</currentMemory>
        <memoryBacking>
          <nosharepages/>
          <locked/>
        </memoryBacking>
        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='0'/>
          <vcpupin vcpu='1' cpuset='1'/>
          <vcpupin vcpu='2' cpuset='2'/>
          <vcpupin vcpu='3' cpuset='3'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough'>
          <topology sockets='1' cores='4' threads='1'/>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/bin/qemu-system-x86_64</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/disks/SSD/VM/Xpenology/vdisk1.img'/>
            <target dev='sda' bus='sata'/>
            <boot order='2'/>
            <address type='drive' controller='1' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/software/KVM/XPEnoboot_DS3615xs_5.2-5644.5.iso'/>
            <target dev='hda' bus='sata'/>
            <readonly/>
            <boot order='1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/cache/VM-Cache/Xpenology/vdisk2.img'/>
            <target dev='hdb' bus='sata'/>
            <boot order='4'/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='sata' index='1'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
          </controller>
          <controller type='sata' index='2'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
          </controller>
          <controller type='sata' index='3'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
          </controller>
          <controller type='sata' index='4'>
            <address type='pci' domain='0x0000' bus='0x07' slot='0x05' function='0x0'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='dmi-to-pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
          </controller>
          <controller type='pci' index='2' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
          </controller>
          <controller type='pci' index='3' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x0'/>
          </controller>
          <controller type='pci' index='4' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x03' function='0x0'/>
          </controller>
          <controller type='pci' index='5' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x04' function='0x0'/>
          </controller>
          <controller type='pci' index='6' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x05' function='0x0'/>
          </controller>
          <controller type='pci' index='7' model='pci-bridge'>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x06' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:e6:78:64'/>
            <source bridge='br0'/>
            <model type='e1000'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
          </interface>
          <interface type='bridge'>
            <mac address='52:54:00:e6:78:63'/>
            <source bridge='virbr0'/>
            <model type='e1000'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
          </interface>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
            <listen type='address' address='0.0.0.0'/>
          </graphics>
          <video>
            <model type='cirrus' vram='16384' heads='1'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
          </video>
          <memballoon model='virtio'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
          </memballoon>
        </devices>
      </domain>
  15. Hi guys, I suppose that, like me, you want a very low-powered NAS with lots of TB of media running a Plex server. The main problem is that low power means slow transcoding. I saw a topic on the Plex forum about remote transcoding: the author had a Python script to execute transcodes on remote machines. Awesome, but the main problem was that you had to set it up manually. I wanted it to happen automatically using cloud computing, so as a proof of concept I created a Plex Docker image with this automatic feature. If you want to try it and give me some feedback, or help me with the development, go ahead; here are the sources, the Docker image, and the docker-compose file. The images are on Docker Hub, ready to use. I couldn't get it working on unRAID, so I had to create a KVM VM with Ubuntu. The problem seems to be the NFS server Docker container, which doesn't work on unRAID; I don't know why, I'm not an expert, so if you can help I'd appreciate it. Here is the link with everything (this is my first project on GitHub, tell me if I've done something wrong): https://github.com/segator/PlexRemoteTranscoderDocker Thanks guys