Everything posted by Jcloud

  1. The Docker reports tunneling because of a choice I made when building the template. If you want to remove that tunnel, change "Network Type" from "Bridge" to "Host" in the template settings and then remove "Storj network ports" (a port forward from host to container, and the source of your tunnel); see the sketch after this post. I'm running it on the host port now, and I did notice the reported delta went down, so perhaps I made a bad setup choice. My logic at setup time was that containers should be fully contained; I also thought it would make potential port conflicts easier to change and to explain on the forums. Sorry nuhll, I don't have those numbers yet; I've only been doing this for the last few weeks myself, as something to tinker with and a way to learn how to set up Docker containers. To be fair, I have space but not payout: I've allocated 3TB to the Storj daemon, and since my OP I've shared 8.02GB. I'm willing to report that number back once a payout occurs. I simply made a template that launches the stock Storj container, but I'll see about taking things to the next level and making a custom repo with stats. It may be a while; we shall see what life throws at me.
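     A minimal sketch of the two launch styles, for context (the image name is the one from the template; the /storj mount point is illustrative, and the wallet/size settings the template normally injects are left out):
     docker run -d --name Storj -p 4000:4000 -v /mnt/user/storj:/storj oreandawe/storjshare-cli   # Bridge: host port 4000 forwarded into the container (Storj reports this as a tunnel)
     docker run -d --name Storj --net=host -v /mnt/user/storj:/storj oreandawe/storjshare-cli     # Host: the container binds port 4000 directly, no forward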
  2. Sorry about the confusion, you definitely don't have one there. Here's mine: https://i.imgur.com/EUpZqzl.jpg And yes, "Apply" definitely re-builds it each time. I think the "Save" button is activated by going to ://YourBox/Apps/ca_settings, then under Advanced changing "Enable Template Developer Mode?" to Yes and applying.
  3. Old topic, and sorry if this is the wrong place to propose this. Another possibility might be to keep it in /boot/previous and make "previous" entries in the syslinux menu; a rough sketch follows this post. That gives a sort of ring buffer of current and past versions, and on an upgrade the oldest "previous" entry would get flushed out. I'm splitting hairs on implementation, and it's just a suggestion, but in general I'm +1 for this being a useful troubleshooting feature for people.
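     A rough sketch of the idea, assuming the stock flash layout (the paths and menu label are purely illustrative, not anything unRAID does today):
     mkdir -p /boot/previous
     cp /boot/bzimage /boot/bzroot /boot/previous/    # stash the outgoing kernel/rootfs before the upgrade overwrites them
     printf '%s\n' 'label unRAID OS (previous)' '  kernel /previous/bzimage' '  append initrd=/previous/bzroot' >> /boot/syslinux/syslinux.cfg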
  4. Looks fine to me; it looks like you've been allocated two units, but you're correct, you haven't gotten any data yet. As far as I'm aware this is nominal behavior, but just in case, the Storj community has an active chat channel: Storj Community jump page. You could post a screenshot of the status and ask the question; if there is something wrong, they'll know for sure. If there is something that needs to be tweaked in the template, I can certainly do that.
  5. Good call. I had seen requests in the forums and was just trying to fulfill them; I may have opened a large can of worms. I've actually run into the issue you mention: Gmail flags everything I've sent as spam. The bigger issue has been receiving email; mail sent from outside to Poste has resulted in no delivery, stuck in the mail server queue (not bounced). It's been over 15 years since I last worked on mail servers, DNS, and delivery systems.
  6. Application Name: Poste.io <==> "SMTP + IMAP + POP3 + Antispam + Antivirus + Web administration + Web email ... on your server in ~5 minutes."
     Application Site: https://poste.io/
     Docker Hub: https://hub.docker.com/r/analogic/poste.io/
     Template-Repository: https://github.com/Jcloud67/Docker-Templates
     INITIAL SETUP:
     0. Requires a registered FQDN to send/receive external email.
     1. The following ports are used by the container for mail: 25, 110, 143, 443, 465, 587, 993, 995
     2. The following ports are used by the container for the webui: 443, 8280 (these may conflict, check your ports)
     3. Make a user share for mail data; the default is /mnt/user/poste
     4. Some or all mail ports may need to be opened, forwarded, or DMZ'd for mail send/receive to work.
     Optional arguments: -e "HTTPS=OFF" disables all redirects to encrypted HTTP; it's useful when you are using some kind of reverse proxy (place this argument before the image name!). A hand-rolled equivalent of the template is sketched after this post.
     NOTE: Marked as BETA simply because the author is not an expert in email exchange servers -- the software itself looks pretty good.
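     For anyone curious what the template roughly boils down to, a hand-rolled equivalent would look something like this (a sketch only: the /data mount point and the 8280-to-443 webui mapping are my assumptions, and mail.yourdomain.tld stands in for your FQDN):
     docker run -d --name Poste -h mail.yourdomain.tld \
       -p 25:25 -p 110:110 -p 143:143 -p 465:465 -p 587:587 -p 993:993 -p 995:995 \
       -p 8280:443 \
       -v /mnt/user/poste:/data \
       analogic/poste.io
     # Optional, when sitting behind a reverse proxy (place it before the image name):
     #   -e "HTTPS=OFF"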
  7. Interesting; I'm also running on a user share myself (no cache drive) and it is working, with 3TB allocated in Storj and two disks in the unRAID user share. So far Storj has only given me 6GB of data. I wonder what the differences are? You're welcome to post your unRAID diagnostics from the Tools menu; perhaps that might shed some light on the differences between yours and mine. I made the user share via the unRAID web interface first, then started the Docker, so that unRAID had the permissions it wants for the folder -- because it made it.
  8. I'll see what can be done. Full disclosure: I just made the template, and the instructions are the sum of what I've found so far to get it working with the Docker container I found. For the future, you can make changes without Docker rebuilding the whole thing by clicking the "Save" button instead of "Apply" (ver 6.4.1); it will keep your current image/configs. The container will have to be restarted for the change to take effect -- see the one-liner after this post.
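     If you prefer the command line to the webui button, the restart itself is a one-liner (substitute your container's name):
     docker restart <ContainerName>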
  9. Application Name: Cacti -- a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality.
     Application Site: https://www.cacti.net
     Docker Hub: https://hub.docker.com/r/quantumobject/docker-cacti/
     Template-Repository: https://github.com/Jcloud67/Docker-Templates
     SETUP AND CONFIGURATION:
     ---------------------------------------------
     0. The SNMP poller default port is 161/UDP.
     1. If you have an SNMP poller plug-in already installed on your unRAID host, it will conflict with this Docker. Either uninstall the SNMP plugin you are using, or make the changes needed for both to work (the author assumes you know what you're doing).
     2. Has a webui, by default on port 8180; adapt as needed to work on your host. (A hand-rolled run command is sketched after this post.)
     3. During initial setup the path to SPINE is incorrect. Change it to: /usr/local/spine/bin/spine
     4. First login -- userid: admin password: admin
     FIRST RUN (Recommendation):
     ---------------------------------------------
     On the left-hand menu, under "Automation" click "Networks", THEN "Test Network" in the right-side main frame.
     In Subnet Range, change the value to fit your network. Click "Save" THEN "Return" at the bottom.
     Next, click the checkbox for "Test Network", THEN in the "Choose an action" box click ENABLE --> GO.
     Check the checkbox for "Test Network" again, THEN choose "Discover now".
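     If you want to try it outside the template, something like this should bring it up (a sketch; the container's internal webui port, 80 here, is my assumption -- check the Docker Hub page for the real value):
     docker run -d --name Cacti -p 8180:80 quantumobject/docker-cacti
     # If the container also runs its own snmpd, 161/udp would need to be published too,
     # and it will conflict with any SNMP plugin already bound on the unRAID host.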
  10. I think Micaiah12 beat me to the answer. How big is your Storj unRAID user share -- does it have 12TB of free space? If you do have 12TB of free space but it's chopped up between all your disks, try the largest value of a given disk chunk. Perhaps it's an error trying to allocate across the disks (possible, but I doubt it). I've only been running the program myself for the last week or so.
  11. Application Name: QDirStat -- Graphical view of directories, find files taking up the most space. Real MovieOS stuff!
     Application Site: https://github.com/shundhammer/qdirstat
     Docker Hub: https://hub.docker.com/r/mjdumont1/qdirstat
     Template-Repository: https://github.com/Jcloud67/Docker-Templates
     The UI is through an RDP client: yourTowerName:33389. The template defaults to /mnt/user/ on the host as the container location /files/.
     -------------------------------------
     Application Name: Storj.io daemon
     What is Storj? https://www.storj.io/
     Application Site: https://github.com/Storj/storjshare-daemon/ (the command-line version)
     Docker Hub: https://hub.docker.com/r/oreandawe/storjshare-cli
     Template-Repository: https://github.com/Jcloud67/Docker-Templates
     Storj is a crypto-asset and P2P cloud storage service. This Docker runs the back-end client for Storj, allowing internet users to rent out their disk space and earn STORJ, an Ethereum asset. @Jcloud makes no guarantee that STORJ or ETH will retain or increase in fiat value. This is only the Storj daemon and CLI tool.
     SETUP REQUIREMENTS:
     -------------------------------------
     1. TCP port 4000 open on the host and set up on the container (should be set below). Ports 4001-4003 are used to tunnel a connection if 4000 is closed, but this typically results in no data being shared. Port(s) 4001++ are used when running multiple nodes.
     2. A path on the host for Storj data to sit. IE: make a user share, /mnt/user/storj or similar.
     3. Your Ethereum-based wallet address.
     4. The max space STORJ is allowed to take up on the host; remember to make it less than the space available on the host. (500MB, 1000GB, 1TB, 8TB -- note 8TB is the maximum size per node.)
     STORJ DAEMON STATUS: (in your command line) docker exec Storj storjshare status
     -------------------------------------
     Application: Storj daemon + CLI tools; storjstat.com monitor script (a fork of the official repo above) . . . OR Rate My Storj
     Application Site: https://github.com/Jcloud67/docker-storjshare-cli
     Docker Hub: https://hub.docker.com/r/zugz/r8mystorj/
     Template-Repository: https://github.com/Jcloud67/Docker-Templates
     This repository/Docker container is in response to forum feature requests -- requests which couldn't be done easily without the source.
     Features:
     The base container you all know: Storj daemon + CLI tools.
     The storjstat.com monitoring script installed, AND a place for users to put the API key in the template webui! (DOES NOT YET AUTO-LAUNCH)
     Now supports building and launching multiple farming nodes, AND a place for users to set this in the template webui!
     Improved webui template, from lessons learned over the past two months.
     Changed the default network type from Bridge to Host (seemed to work out better for most and/or was preferred). *Feature? A change.*
     Auto-launch of the storjstat.com monitor script, and keeping it on.
     Currently working on, or trying to improve:
     Saving storjshare state after loading multiple nodes; the entrypoint seems to do this for me.
     Auto-building updates on some periodic schedule.
     Setup requirements are the same as Storj. Running the storjMonitor and/or multiple farming nodes is optional. (A quick status-check example follows.)
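     For a quick health check from the unRAID command line (container name "Storj" per the template):
     docker exec Storj storjshare status    # prints the share status table (status, uptime, peers, shared space)
     docker logs --tail 50 Storj            # recent daemon output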
     -------------------------------------
     Sia-Coin
     Sia-coin website: https://sia.tech/
     Sia-coin client utilities: https://sia.tech/get-started
     Docker hub site: https://hub.docker.com/r/mtlynch/sia/
     Original Docker install instructions: https://blog.spaceduck.io/sia-docker/
     Repository site: https://github.com/mtlynch/docker-sia
     Template site: https://github.com/Jcloud67/Docker-Templates
     Sia-coin, from their website: "Sia is a decentralized storage platform secured by blockchain technology. The Sia Storage Platform leverages underutilized hard drive capacity around the world to create a data storage marketplace that is more reliable and lower cost than traditional cloud storage providers."
     *** DISCLAIMER *** I have no affiliation with Sia-coin or Mr. Lynch. I was asked if Sia was possible; I found someone had already invented the wheel, I just made the hub-cap. I do not guarantee, nor will I hold any responsibility for, loss or corrupted data, or that Sia-coin (Sia), Bitcoin (BTC), or any other "crypto-currency" will gain or maintain its current fiat value. *** Please, with any and all "ICOs," "exchanges," "tokens," and cryptos, do exhaustive research before any transaction. ***
     SETUP:
     1. Make a user share for Docker data; I just called it "Sia": Example
     2. Punch a hole in your firewall from outside to your Docker's IP address, TCP ports 9981 and 9982. DANGER!: Sia also uses TCP port 9980 for command & control; do NOT expose 9980 outside of your network! Failure to follow this rule can result in wallet and Sia host hijack.
     3. Download the Sia client from the download URL if you want a GUI experience.
     4. Set up the Sia template and Docker container on unRAID using the CA store.
     5. For GUI clients running on a VM or on the LAN: start the client and click on "About". After closing the Sia client, with the file manager opened to Sia's configuration files, edit the config.json file: change "detached": false, to "detached": true, and in the "address" line change "127.0.0.1" to the IP address of your Sia container. Save the config.json file. When you restart Sia, it should connect to your Docker. (A sketch of this edit follows below.)
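     Done from a shell, the step-5 edit boils down to two substitutions in the client's config.json (a sketch only: the config.json path depends on your client OS, and 192.168.1.50 stands in for your Sia container's IP):
     sed -i 's/"detached": false/"detached": true/' config.json
     sed -i 's/127\.0\.0\.1/192.168.1.50/' config.json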
  12. Challenge accepted. I found a Cacti Docker, made a simple template, and got it to load; a couple of different problems are cropping up, so I'm tinkering around. I'm not a dev, so it may be a while before I post something useful. But it is out there in the wild.
  13. Short answer (ouch, bad pun) is it could be either. I would say it's more likely to be the PSU, but with this level of failure (from your picture) I would play it safe and yank both. Frank1940's info is also good -- well, the community's info, really.
  14. In your BIOS try hitting F7 for Advanced, then boot menu -- somewhere in there perhaps? I don't have your mobo, or the manual up at this time, so I'm just generalizing here.
  15. Sounds similar to something I ran into (granted, these were Ryzen Threadrippers; perhaps it's the same with the new AMD parts?). In your computer's BIOS -- the iron, not the VM -- what is your boot type set to? Is it EFI (only), EFI with legacy support, or legacy only? If it's the second one, are you booting into EFI first or into legacy? If you don't know the answer to that last one, does your /boot have an "EFI" or an "EFI-" folder in it? (See the one-liner after this post.) I have a problem on mine where EFI-only causes error 43 with my GPU and I get 800x600 in Windows, because it can't load the graphics driver. Pure legacy mode would make the error go away, BUT the Windows guest would crash after about 5-6 hours. EFI with legacy mode, booting via the legacy path, makes for a rock-solid Windows guest. Here's some more background, from a different thread. I wouldn't worry about constantly activating Windows. Windows 10 will give you 98% functionality without activating. No activation in W10 means: a watermark at the bottom right-hand corner of the screen, and a lockout of personalization settings (desktop, screen saver, lock screen, colors, etc). That should be all the functionality you need for working on the Windows guest.
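     To answer the /boot question quickly from the unRAID console (as far as I know, the folder is named "EFI" when UEFI boot is enabled and "EFI-" when it is disabled):
     ls -d /boot/EFI* 2>/dev/null    # shows EFI (UEFI boot enabled) or EFI- (disabled)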
  16. Yes, just like your Windows installer ISO, you need to mount the virtio ISO as a cdrom. If you don't have the virtio drivers file yet, click on the "?Help" for your VM template; in the help text there should be a URL to download the image file. Just put it with your other ISOs and then reference it. This should pop up the missing drive; if it doesn't, you could open Disk Management in Windows: right-click on the Windows button, choose "Disk Management," then see if there is a cdrom device that is missing a drive letter, and give it one. I doubt you'll need this last step. I hope this helps.
  17. The system doesn't -- it's just JBOD. For Docker/apps I use the CA Backup utility. For my VMs I make a manual copy of my vhds to a vmBackup folder on my array. Everything else on the cache is handled by Mover for the shares. That just leaves the Steam games I keep on the cache, and if I lost those I'd only lose time, since I'd just download them from Steam again (for me this isn't a problem). So you're correct, my cache is not protected -- I'm fine with that, and I take steps to cover my bum.
  18. If you don't need the mirror protection of the raid-pool but want to treat it as a JBOD pool that is also possible.
  19. Just got my power meter from Amazon today and took some quick readings. Honestly, I'm surprised by the numbers; I was betting on twice as much overall. Makes me wonder what my old 3930K power draw was. I would test, but a friend is using it, so that's out. Also just got a new-to-me replacement HBA from @TUMS, and discovered TUMS and I are about an hour apart. I'm still floored -- honestly, how many unRAID users could there be in Idaho? That's like being one step away on Seven Steps to Kevin Bacon. I haven't installed the card yet; it's my Friday, I finished work, and I refuse to do my job, even in the name of geek-dom, on a Friday evening. A power plug and a reboot, yes; a card swap, even one so trivial, nah.
  20. Sure, make me your enabler. Seriously though, I'd say go for it, especially if you were already interested in this case. Functionally I'm very happy with my purchase; visually I still hate it -- it looks like a cheap Iron Man knock-off, but I get the manufacturer's choices on their engineering/construction. The biggest (non-)issue was that when I took off that clear manufacturing film -- the stuff on plastic that keeps the glossy finish from being scratched/smudged -- it took the two Z's from the logo with it. So I have an "A A" case now. LOL All the bits that matter to me are good. I went with the tray-less models, and I both do and don't like the enclosures. As an enclosure, it has everything I want. What I don't like are two bits: the power button and blue LEDs are kind of cheap in construction, and the drives sometimes need a nudge on the door to be fully seated in the connector and show up on the controller. The electronic switch is in the back of the bay and is pushed by a clear plastic push rod (the blue LED you see at the front), which also acts as a light pipe for the LED; on one bay I found I had to press it "just right" to get it to turn on. As for the drive door, they used a stainless-steel leaf spring which pushes against the drive -- and in my case a different bay needs a slight nudge after closing to register (so I probably need to get a pair of pliers and just tweak it a bit). Overall I think I'm 50/50 on my enclosure choice, but I would buy/try it again. EDIT: 6/15/2018 -- In regards to the Icy Dock enclosures, I don't recommend them. I want to like them, and I bought a second one, but it too has a manufacturing defect: there's a bay whose on/off switch won't stay in the ON position. Given the MSRP of these Icy Dock products and their flaws, I wouldn't recommend them as a "must buy" sort of product; they would be worth considering on a fire-sale discount.
  21. Yeah, I'm reading Intel's documentation now. I just saw the PCI port on it and assumed it needed that for some reason (i.e. not an optional feature), although I was coming up blank on what that reason would be.
  22. Now you have my attention, and blew my mind. Not on the PCI-E slot? Time for me to go look at the specifications. I'd prefer to keep my slots for GPUs. Thanks.
  23. Exactly what this is. Six years on my last combo, although I was on my second mobo. I probably could have run my old system for another year, but Threadripper came out and boy did I have an itch that needed to be scratched! The SSDs are on the motherboard; the plan was to keep the SAS card for just hard drives. I should redo the cabling for all my SATA cables; that's about when I got impatient with myself and just wanted it on. The system is on -- my phone's flash washed it out.
  24. Most recent edit date: Mar 4th, 2018.
     My new build, much like the hydra, is one part old bits, with the hacked-off bits having grown back stronger and more wild. The metaphor also fits unRAID itself (and what it offers the user): one head is HD protection and the array; another head is the Docker app server; another KVM; another the NAS shares, again split off from the array, its neck. And now this hydra has one more head: Threadripper. The metaphor just fits for me.
     OS at time of building: 6.4.1 Stable
     OS Current: 6.7.2
     CPU: AMD Ryzen Threadripper 1950X
     Heatsink: Noctua NH-U14S TR4-SP3
     Motherboard: ASUS PRIME-X399-A, BIOS 0407 (at build), running BIOS 1002
     RAM: 128GB HyperX Fury DDR4 2666MHz, (2x) HX426C16FBK4/64
     Case: AZZA Solano 1000R
     Drive Cage(s): (2x) ICY DOCK 5 Bay 3.5 SATA HotSwap
     Power Supply: Antec 900W HCG
     SATA Expansion Card(s): LSI Internal SATA/SAS 9211-8i
     Parity Drive: 4TB Western Digital "gold" WDC WD4002FYYZ-01B7CB1
     Array Disk1: 3TB Western Digital "red" WDC WD30EFRX-68EUZN0
     Array Disk2: 3TB Western Digital "red" WDC WD30EFRX-68EUZN0
     Array Disk3: 3TB Western Digital "red" WDC WD30EFRX-68EUZN0
     Array Disk4: 4TB Western Digital "red" WDC WD40EFRX-68N32N0
     Array Disk5: 4TB Western Digital "gold" WDC WD4002FYYZ-01B7CB0
     Array Disk6: 4TB Western Digital "blue" WDC WD40E31X
     Array Disk7: 3TB Western Digital "red"
     Cache Drive0: 250GB Samsung 850 EVO SSD
     Cache Drive1: 120GB OCZ-VERTEX460A SSD
     Cache Drive2: 500GB Samsung 850 EVO SSD
     Total Hard Drive Array Capacity: 24TB
     Total Cache Drive Array Capacity: 870GB (JBOD, not protected storage)
     Primary Use: To be my "Fort Kick Ass." Gaming and mucking around with VMs; secondary protected data storage, NAS functionality, application server: notes (wiki), Handbrake, BT.
     Likes: A Linux distro powerful enough to do everything I wanted out of my computer, but easy enough that a relative Linux newb like me could use it.
     Dislikes: I'll have to come back to this one (even as a 2-year user). Yes, there can be, and often are, some limits to VM functionality, but that's not unRAID's or the devs' fault -- just the state of Linux technology and microcode. The first time I tried to touch Linux was in '98; what we have now . . . omg, NO complaints! LOL It's also more fun to accept the VM quirks as a hobbyist than to tank the server, especially with the gains in protected storage.
     Plugins Installed: Community Applications; Dynamix Active Streams, File Integrity, Local Master, Schedules, SSD TRIM, System Information, System Statistics; Fix Common Problems; Nerd Tools; Unassigned Devices; unBALANCE
     Docker Containers: couchpotato, Sonarr, DokuWiki, Dolphin, dupeGuru, Handbrake, Netdata, RDP-Calibre, Transmission, Storj, QDirStat
     Future Plans: Going to get a VIVE Pro; it will work on the VM. The case was chosen for the nine external 5.25" bays, so as time progresses the plan is to add 1-2 more drive bays for a total of 15 drives. If I fill those, the SSDs will be moved down to the bottom area, where there is space for two more internal 5.25" bays. For port expansion I have the card, and my thought was to get an Intel RES2CV360 RAID Expander; the documentation on the SAS card says it can support something like 64 devices but only has ports for 8, so I thought, why not this direction? (Update: went with an HBA instead.) No, seriously, if anyone knows this will not work, I'd like to hear it. Answered, thank you -- the Intel device is just a SAS port-replicator board and has nothing to do with the RAID protocol.
     POWER USAGE:
     Edit1: Just got the power meter as of February ninth, so I now have values. These are instantaneous readings, not averages over time.
     Boot (peak): 161.3 W
     Idle (avg), VM off: 149-153 W
     Idle (avg), GPU VM on: 134-136 W
     Active (avg), running STEEP in VM: 330-360 W
     VM idle (12 threads), 12-thread Handbrake transcode: 300-330 W
     STEEP plus continued Handbrake transcode: 450-475 W
     Light use (avg): ## To be filled in later ##
     Thank you Lime-Tech developers and community developers who put in their time and expertise. I've been a very satisfied user; you all do good work.
     EDIT0: I've been lurking a lot more on the forums the last month or two and came to the conclusion that I under-utilize my cache on user shares. Added a 500GB Samsung EVO to my cache pool, now reporting 870GB. Changed some user shares to include the cache pool, which I had previously excluded. Next task: finish the cabling; the drive cables are a mess and need to be redone. Next technical task is to try moving the LSI card back up in the PCIe slots and get it working; then source a GPU for player 2, since the previous card went to a friend -- the current thought is to use the old Radeon 6800 as a concept proxy, seeing how it hardly gets used. Still need to go shopping for a Kill A Watt.
     EDIT1: The new SAS HBA is installed and another drive enclosure added. Upgraded the parity drive from a WD blue to a WD gold; once the new parity drive was swapped in, I added the old blue drive to the array for more storage. The parity check time from blue to gold dropped by about four hours. Also got a GTX 1050 to start testing a second VM, since that still seems to be an issue on Threadrippers, although the first boot with the GTX 1050 has been very positive (i.e. it's working).
  25. This thread is huge, and I haven't been following it, but to come full circle, @coppit brought the latest proposed PCI patch (from the Linux kernel folks) for Threadripper to my attention in another post. For those folks who would be interested in testing it on 6.4.0-unRAID, have a look here -- it's been compiled.