talkto_menow


Everything posted by talkto_menow

  1. Hi, I would like to install .NET Core (Dotnet Core) on my Unraid box so I can run some console apps. I came across SlackBuilds, but for some reason I'm getting a permission denied message. I ran chmod 777 on the downloaded files, but that did not help. https://slackbuilds.org/repository/14.2/development/dotnet-sdk/ There are instructions on how to install .NET Core on Ubuntu etc., and I'm guessing the process would be similar on Unraid. Has anyone tried and successfully installed .NET Core on their machine? I'm not interested in running .NET Core inside a Docker container. Thanks
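     In case it helps anyone landing here: Microsoft also publishes plain tar.gz builds of the SDK that can be unpacked anywhere, with no package manager involved. A minimal sketch, with example paths and an example version number (not a tested Unraid recipe); note that a "permission denied" that survives chmod 777 often means the binary sits on a filesystem mounted noexec, which you can check with mount:

       # Unpack a binary SDK release to a persistent location (paths/version are examples)
       mkdir -p /mnt/user/appdata/dotnet
       tar -xzf dotnet-sdk-2.1.4-linux-x64.tar.gz -C /mnt/user/appdata/dotnet
       export DOTNET_ROOT=/mnt/user/appdata/dotnet
       export PATH="$PATH:$DOTNET_ROOT"
       dotnet --info   # verify the SDK runs; it also needs libs (e.g. ICU) stock unRaid may lack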
  2. Hi, I'm interested in your opinions, experiences, and the choices you made regarding online backup. I had a Crashplan account and recently decided to part ways with it. After looking at various options, I decided to go with the Duplicati docker + an online backup service. Duplicati supports various services; however, not all services are created equal. Crashplan's monthly fee was initially $5. Then, when they made business changes, the price went up, and at some point I was paying $30 per month. I always had some problems with the Crashplan docker, but I did not want to delete my old backups. Finally, I decided to look for a different solution. I like the Duplicati docker's reliability and ease of restoring files, but there is still the question of the online service. Amazon Drive is no longer supported. I have a Microsoft OneDrive account (included with Office 365), which gives 1 TB of online space; an additional 1 TB costs $10 per month. I looked at Amazon AWS S3, but determining the monthly cost is much more complicated than it should be. I estimate that the most critical data that needs to be backed up (photos + documents) is at least 1 TB; 2 TB is probably sufficient. Backing up entire shares at these prices seems prohibitive. Please let me know what backup strategies you implemented, what online services you use, and how much they cost you. Thank you
  3. Thank you. I will use a different strategy, with filesystem conversion and drive swapping.
  4. Thanks. I know this procedure; however, when you are replacing a drive with a new one and you want to set a new filesystem, Unraid will perform the formatting anyway. We are not performing a conversion from RFS to XFS. Parity has no specific filesystem. Based on what you have said, the part of parity that relates to the swapped drive will be deleted. To tell the truth, I do not know how this is possible without recalculating parity. Most users will not know this. They would simply swap drives and set the desired filesystem in the process. They would definitely not know that their array could be affected, despite the warning message that appears next to the new drive.
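     For anyone puzzled by the same thing: single parity in Unraid is a bitwise XOR across the data drives, so any individual write (including the writes a format performs) updates parity incrementally from just the old and new contents of the changed drive, with no full recalculation. A toy example with three one-byte "drives":

       # P = D1 ^ D2 ^ D3; when D2 changes, only D2's old and new values are needed:
       #   P_new = P_old ^ D2_old ^ D2_new
       P_OLD=$(( 0xA5 ^ 0x3C ^ 0x0F ))
       echo $(( P_OLD ^ 0x3C ^ 0x00 ))   # 170, parity updated incrementally
       echo $(( 0xA5 ^ 0x00 ^ 0x0F ))    # 170, recomputed from scratch -- identical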
  5. How are you supposed to change the filesystem or replace a drive (format it) when parity is affected by it?
  6. Unraid 6.5.2. I'm not sure if the data loss was caused by my own actions or if this is a software bug. I was replacing drives in my server. I purchased two 12TB drives: one would replace the parity drive, and the second would replace one of the array drives. I precleared both drives. The total array size was 15TB, with only 200 GB of free space left. These are the steps that I took:
     1. Replaced the parity drive
     2. Rebuilt parity
     3. Replaced the smaller drive with the 12TB drive
     4. Started the array
     The rebuild process started, but I realized I had forgotten to set the drive's filesystem to XFS; by default, it was set to reiserfs.
     5. Cancelled the rebuild process and changed the drive format setting to XFS
     6. Unraid informed me that the drive needed to be formatted
     7. After formatting the drive, the rebuild of the array started again
     The rebuild completed, and I decided to replace another smaller drive with the previous parity drive. This time I made sure the drive format setting was set to XFS. Once Unraid detected the new settings, it formatted the drive and proceeded with rebuilding the array. I noticed that something was off when I started Docker again: all my apps were missing. Then I noticed that the array's used space was now only 9.7TB. The difference is equal to the two smaller drives I removed. Bear in mind that during the second drive replacement I did not interrupt the process at all. Luckily, I still have the two smaller drives intact, and I'm copying the files from them back to the new 12TB disk (disk to disk). I'm a little reluctant to do another drive replacement in the future. As I said, this could have been caused by my own actions, but cancelling the rebuild process should not have such an effect. Before I proceeded, I made sure that parity was valid.
  7. I bought two 12TB drives. One will replace the parity drive and the second will be added to the array. UD is not letting me even touch this drive, so I was worried that something went wrong during the preclear process. Some people reported that they had to use the -A flag on their larger drives, but I think that issue has already been fixed. Thanks
  8. I ran the script on a new WD 12TB drive in a USB caddy with the following command: preclear_binhex.sh /dev/sdi The process completed; however, Unassigned Devices is not letting me format the drive. Previously, before running the preclear script, UD showed a format button. Right now there is only a mount button that does nothing. UD is not showing the size of the drive, and it is also set to destructive mode. I did not see any errors during the preclear process. My questions are: 1. Should I run the script again with the -A flag? 2. Should I format the drive using a console/terminal command? 3. If I swap an array drive with the new 12TB drive, will Unraid recognize it and allow formatting? Thanks
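     On questions 1 and 2, hedged sketches only (check the script's own help output first, since option letters can vary between preclear versions, and remember that drives going into the array are normally formatted by Unraid itself, not by hand):

       preclear_binhex.sh -A /dev/sdi   # re-run preclear with the alignment flag mentioned above
       # Only for a disk used outside the array (device name is an example; verify with lsblk):
       mkfs.xfs /dev/sdi1               # put an XFS filesystem on the first partition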
  9. Upgraded from 6.5.2 to 6.7.2. The OS loaded and everything seemed fine until I played something through Kodi on my HTPC. Videos played for about 10 seconds and then stopped. I tried multiple videos with the same result. I played through Plex on an Xbox One and there were no issues. I checked on another PC: Tower was no longer visible under Network; however, I was able to browse shares when I specified the IP address. Rebooting the Tower, HTPC, and PC did not solve the problem. I had to revert to 6.5.2 and the network issues disappeared.
  10. I created this script because I was slowly running out of space on my unRaid box. The script allows for archiving optimized versions; please read the article on my blog. This Python script will help you archive Plex DVR recordings. Plex DVR is a welcome addition to this great media server. However, before you start recording your favorite show, you should be aware that recordings can take up a lot of space on your server. Plex saves the files as Transport Stream, with the extension "ts". On average, a one-hour show will take about 6 GB of space. If you plan to record more shows and would like to watch them at a later time, you will eventually run into the problem of not having enough space. One option is to purchase larger hard drives. The other option is to compress recordings into smaller formats. Plex does not have this feature yet; however, it is capable of compressing files into mp4. This is called an optimized version. Once you decide to create an optimized version, Plex will create a special folder within the TV series season folder where the compressed files are stored. This allows you to stream the optimized version to your phone or any other device, but it is not an archiving feature: once you delete the original recording, the optimized version is also deleted from the hard drive. This is where the Python script comes in. It will move the optimized mp4 version into place, replacing the original ts recording (see the sketch below). Before we can run the script, we have to prepare the computer or server. The script was written with a Linux unRaid box in mind; however, since it is a Python script, it should run on any PC that has Python installed. Please let me know if you have any questions. Thanks
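     The real tool is the Python script from the blog post above; purely as an illustration of the move-and-replace idea, here is a much-simplified shell sketch (the DVR path and the "Plex Versions/Optimized for TV" folder name are assumptions that depend on your library layout and optimization profile):

       #!/bin/bash
       # For every .ts recording that has an optimized .mp4, swap the two.
       shopt -s globstar nullglob
       for ts in /mnt/user/DVR/**/*.ts; do
         dir=$(dirname "$ts"); base=$(basename "${ts%.ts}")
         mp4="$dir/Plex Versions/Optimized for TV/$base.mp4"
         if [ -f "$mp4" ]; then
           mv "$mp4" "$dir/$base.mp4" && rm "$ts"   # keep the small copy, drop the original
         fi
       done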
  11. Is it possible to have more than one instance of the openvpn-as docker running at the same time? The default UDP port is 1194. Can you assign different ports to the second instance, and would that be reflected in the .ovpn profile file? Thanks
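     Not an authoritative answer, but in principle nothing stops two containers from coexisting as long as the host ports differ. A sketch, assuming bridge networking and the linuxserver.io image (the container name and appdata path are made up); whether the generated .ovpn picks up the new port depends on the server settings in the Admin UI, so you may need to change it there or edit the profile's remote line yourself:

       # second instance, shifted host ports (names/paths are hypothetical)
       docker run -d --name=openvpn-as-2 \
         -p 1195:1194/udp -p 944:943 -p 9444:9443 \
         -v /mnt/user/appdata/openvpn-as-2:/config \
         linuxserver/openvpn-as
       # the client profile then needs to point at the new port:
       #   remote your.host.name 1195 udp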
  12. I received the ECC memory from Kingston (2x16GB), and I have to say the installation was not as straightforward as I thought it would be. At some point I thought I had a defective RAM module or that something was wrong with the motherboard. I put in the first module (bank 1) and there were no issues with booting; the memory was recognized. Then I put in the second one (bank 2) and, to my surprise, got an error message that memory is not installed. This is exactly the same message I saw with the registered memory I had before. I swapped the memory modules (bank 1) and still got the error. I removed the module that I knew was working and left the other one. Again, the error. After a couple of attempts at removing and reseating it, it finally booted up, so now I know that both RAM modules work correctly. I put the second module back in (bank 2) and again got the error message. Just like before, I had to remove and reseat it a couple of times before both modules were finally recognized. 32GB of RAM is maybe overkill, but I'm planning to run a Win10 VM with 8GB allocated, plus a bunch of dockers including the Plex server; 16GB is way too little for that setup. I've been running unRaid on 4GB of memory and that is just not enough: it affects responsiveness and Plex transcoding as well. Final thoughts: I'm glad I decided to go with this Intel Xeon. This is a perfect CPU for the average user. It has enough horsepower to do 3-4 (maybe more) simultaneous Plex streams. Previously, when I was streaming something at my workplace, performance was choppy, and sometimes I was not able to stream at all. I thought it was due to bad cell coverage, but the real reason was that my older server could not keep up with transcoding the video. Thanks to the recommended Noctua CPU fan, the server stays quiet and temps are around 35C.
  13. Thanks for the article. It looks like that might be the issue. Anyway, I requested an RMA from Supermicro and it was approved; I will be shipping the motherboard next week. I also ordered memory that should be compatible with the Intel Xeon CPU. It's pretty hard to find, and based on other people's reviews and experiences there should be no hiccups this time. I purchased the memory directly from Kingston. There are other resellers on Newegg and Amazon, but be aware that most likely all of them buy these modules directly from Kingston and then mail them to you. You can search for compatible memory on their website https://www.kingston.com/en/memory/search?DeviceType=&Mfr=ASR&Line=Motherboard&Model=96303 Buying a 16GB module is the best option; this way you can add one more module later to max out the RAM.
  14. I have the new server running right now, but unfortunately I was not able to install the memory I bought. Every time I put either 8GB module into a memory bank, I got the error message "Memory not installed PEI-Intel Reference Code Execution ...55". I was not even able to get into the BIOS; the motherboard was unresponsive. I took the 4GB non-ECC memory out of my HTPC, put it into the server, and bingo, everything works. I checked in the BIOS, and the motherboard should recognize ECC automatically. So disappointing. I had to return the memory to Newegg. I'm not sure what to do. ASRock has a short list of compatible RAM, but on the other hand I'm worried it may not work, just like the modules I ordered. Plus, DDR4 memory is so expensive right now that you have to think twice before you purchase anything. The good news is that this motherboard supports Kaby Lake processors out of the box; it has BIOS v2.30.
  15. @dmacias I will definitely request an RMA from Supermicro. This is what I ordered:
     ASRock E3C236D2I Mini ITX Server Motherboard LGA 1151 Intel C236
     Intel Xeon E3-1230 v6 Kaby Lake 3.5 GHz 8MB L3 Cache LGA 1151 72W BX80677E31230V6 Server Processor
     Crucial 8GB 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model CT8G4RFS4213
     Noctua NH-L9x65 92 x 92 x 14mm, 92 x 92 x 25mm SSO2 Low-profile Quiet CPU Cooler, NF-A9x14 PWM fan
     I'm hoping this build will last more than 4 years. The CPU has a Passmark score of a little over 10,000. I was looking for ways to cut corners, but in the end I decided that 16GB of RAM should be sufficient for a 15TB array. I'm running 10+ dockers, and I'm hoping the system will be good enough to record TV using the Plex or HDHomeRun docker. I wanted a stable and reliable system; that's why I went with an Intel Xeon and ECC RAM. I also considered an i3 CPU with a 35W TDP. I have a similar chip in my HTPC and was not impressed with WTV streaming (browser): the HTPC was pretty much choking when I was watching TV and streaming at the same time. Surprisingly, transcoding speeds were really good when the system was doing only one job. The ASRock motherboard supports Kaby Lake CPUs from BIOS 2.20. I hope I will not have to flash the BIOS; otherwise I will have to use that HTPC i3 chip to do the job.
  16. I purchased the C2758 almost exactly 3 years ago, in December. However, I do not know if I violated the warranty in some way. Due to overheating, I had to install a Supermicro CPU fan similar to the one I had, plus a fan on top. I believe I still have the old one too.
  17. I reached out to Supermicro support to ask about Intel Xeon support for this particular board, and I do not have good news. The motherboard is not compatible with the Intel Xeon E3-1230 v5. It does support Intel Kaby Lake CPUs, but only with BIOS 2.0 or above. Supermicro has a tutorial on how to create a bootable USB flash drive for updating the BIOS. At this point I think the ASRock motherboard that tdallen recommended is a much better option. @tdallen Thanks for your mobo recommendation. @t33j4y Thanks, I will consider this CPU fan.
  18. The ASRock board you mentioned looks very interesting, and per their support page it also works with other i3/i5/i7 CPUs, which is a big plus. I may have to rethink my design. How is the noise level of the E3 Xeon running the stock fan? The Xeon's TDP is rather high, so my concern is that the fan would be pretty loud. I cannot put my server in another room or a closet.
  19. I hope someone can help me out. My Supermicro board running the Intel Atom C2758 processor just failed completely and would not POST at all. I had to swap the mobo for an older Atom version, but it is so slow, and the entire system occasionally freezes, that I decided to upgrade to something more powerful and more reliable. I do not want to replace the chassis and power supply, therefore I need a mini-ITX motherboard. I was able to find this Supermicro motherboard: SUPERMICRO MBD-X11SSV-Q-O Mini ITX Server Motherboard LGA 1151 Intel Q170. According to the specs, it takes a whole range of CPUs:
     Intel 6th Generation Core i3 / i5 / i7 series
     Intel 7th Generation Core i3 / i5 / i7 series
     Intel Celeron
     Intel Pentium
     Socket LGA 1151 supported; CPU TDP support up to 91W. I was thinking about getting this CPU: Intel Xeon E3-1230 V5 3.4 GHz LGA 1151 80W BX80662E31230V5 Server Processor. It's the same socket, so in theory it should work, unless there is some compatibility issue. The same goes for memory: I checked pcpartpicker.com for available RAM modules, and it looks like there is plenty to choose from, but again I'm worried about compatibility issues. Has anyone built something similar and can recommend some parts? Thank you
  20. I'm having a problem with this docker. It shows that it is running, but I cannot access it in the browser. It seems to be installed properly. I specified Version = 16 in the environment variables, edited advancedsettings, and copied sources. Kodi headless stopped updating at some point. The log shows the container finishing and then starting again:
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 10-adduser: executing...
     -------------------------------------
     (linuxserver.io banner)
     Brought to you by linuxserver.io
     We gratefully accept donations at:
     https://www.linuxserver.io/index.php/donations/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 30-config: executing...
     [cont-init.d] 30-config: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
     [cont-finish.d] executing container finish scripts...
     [cont-finish.d] done.
     [s6-finish] syncing disks.
     [the same init sequence then repeats, ending at "[services.d] starting services"]
     I do not think it is browser related; I tried opening the webUI in both IE and Firefox without success. I changed permissions too.
  21. I created a tool that will help you move a Plex database between computers. It's available on GitHub https://github.com/mpcdigitize/plexdbfix/raw/master/PlexDbFix.zip and also on my blog http://mpcdigitize.com/blog/2016/07/24/plexdbfix-moving-database-between-computers/ Please let me know if you encounter any issues.
  22. As you mentioned before, your Library folder is very large. You should actually go through each folder and check its size; this way you would have an idea of why it is so big. I do not think thumbs are the cause of it. Always back up your library files before you proceed. Plex makes a copy of com.plexapp.plugins.library in its original folder (I think it's a Plex Pass feature). Of course you can use your existing Plex/Library folder for a new installation. However, I did this a couple of times and had problems starting the Plex server. It turns out you must delete these 2 files before you start the Plex server: com.plexapp.dlna.db-shm com.plexapp.dlna.db-wal
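     A quick way to do that size check from the server console (the appdata path is just a typical docker mapping, adjust to yours):

       cd "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server"
       du -sh */ | sort -rh | head   # largest subfolders first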
  23. I was not clear: the Plex server can be installed in a different folder, but the volume mappings need to be the same, because the database stores the file path of each video file. I created a tool for myself that moves a Plex server from one PC to another and corrects the file paths. I will try to make it available to everyone. Sent from my iPhone using Tapatalk
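     For the curious, the core of that path correction boils down to one SQL statement against the library database. A sketch, assuming the paths live in the media_parts table (schema details can differ between Plex versions, so work on a copy of the file):

       # rewrite /old/mount -> /mnt in every stored media path (back up the db first!)
       sqlite3 com.plexapp.plugins.library \
         "UPDATE media_parts SET file = replace(file, '/old/mount', '/mnt');"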
  24. Do not worry, the watched status is stored in the SQLite database. The large size is due to thumbs and maybe some cache files. Before you proceed, you need to save this file: \\TOWER\plex\Library\Application Support\Plex Media Server\Plug-in Support\Databases\com.plexapp.plugins.library It contains all the information you need, plus references to the artwork. Remember: for this to work, your new installation has to be set up the same as before, and that includes the libraries you created in Plex. For example, I have the volume mapping /mnt --> host path /mnt, which means the new installation has to use the same mapping. If I used /mnt --> /mnt/Media, Plex would fail to play my movies because it would not find them in that location. Now you can start from scratch: install the Plex docker and re-create your libraries in the Plex server; Plex will create all the necessary metadata, like posters and fanart. Stop the Plex docker when you are done. Go to the folder \\TOWER\plex\Library\Application Support\Plex Media Server\Plug-in Support\Databases\ and replace the file com.plexapp.plugins.library with the one you saved before. Important! You have to delete 2 additional files (Plex will create new ones): com.plexapp.dlna.db-shm com.plexapp.dlna.db-wal If you do not delete them, Plex will not recognize the replaced database. All artwork should be fine too; as I mentioned, the database holds only references to it, not the actual files. The swap, condensed into shell commands, is sketched below.
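     The replace-and-clean-up step as commands run on the server itself (the appdata path is a typical docker mapping and the backup location is a placeholder; adjust both to your setup):

       cd "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
       cp /path/to/backup/com.plexapp.plugins.library com.plexapp.plugins.library
       rm -f com.plexapp.dlna.db-shm com.plexapp.dlna.db-wal   # Plex recreates these on start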