Everything posted by Energen

  1. Ah, yes, hmm.. I did not consider that parity is calculated at the same time. I'm down to the last disk that I'll be able to move and format before having to move data off the first full disks, so perhaps then I'll disable parity and see what the speeds look like. In my current operation I'm averaging around ~50MB/s, so it will be easy to tell how much overhead parity accounts for, and then perhaps I can try some other stuff with the multi-threaded moving without parity and see.. if I can figure out an easy multithread function. Thanks for the feedback.

     Edit: After removing the parity drive, I had burst speeds of about 120MB/s; moving small files (photos and the like) averaged around 85MB/s, fluctuating between 70-90MB/s.

     Edit 2: So that's not advisable to do... the parity rebuild time alone makes it worse.
  2. Am I doing this wrong, or what don't I understand here....? (which is a lot) I'm playing with a Gotify docker container for push notifications, and with this letsencrypt docker for SSL certificates. Is it possible to use the SSL certs from the letsencrypt container in the Gotify container, and if so, how?

     The Gotify config file has an area for SSL:

         ssl:
           enabled: false # if https should be enabled
           redirecttohttps: true # redirect to https if site is accessed by http
           listenaddr: "" # the address to bind on, leave empty to bind on all addresses
           port: 443 # the https port
           certfile: # the cert file (leave empty when using letsencrypt)
           certkey: # the cert key (leave empty when using letsencrypt)
           letsencrypt:
             enabled: false # if the certificate should be requested from letsencrypt
             accepttos: false # if you accept the tos from letsencrypt
             cache: data/certs # the directory of the cache from letsencrypt

     But this seems to require that letsencrypt is running within the same docker container? I've tried just copying the files from appdata/letsencrypt to a folder in appdata/gotify, but the files "weren't found", so I'm not sure where Gotify was looking for them. The main config file is found in appdata/gotify/config; I tried the certs there also. Gotify doesn't have a support thread here, so I'll try in the letsencrypt thread, since I need the letsencrypt files. Thanks for any assistance.
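     Edit: the approach I'm going to try next (an untested sketch; the host path is an assumption based on the letsencrypt container's folder layout and a made-up domain name) is to mount the cert folder into the Gotify container read-only instead of copying files around, then point the config at the container-side paths:

         # extra volume mapping on the Gotify container (host path is a guess):
         #   /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mydomain.com:/certs:ro
         ssl:
           enabled: true
           port: 443
           certfile: /certs/fullchain.pem
           certkey: /certs/privkey.pem

     fullchain.pem and privkey.pem are the standard certbot output names; if the container keeps its certs somewhere else, the host side of the mapping would need to change to match.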
  3. I just started messing with this to move data in order to reformat drives.. so far so good.. Cool utility... but I wish it were faster. Not sure if this has been mentioned in any of the previous 61 pages... forgive me. Would it be possible/desired/wanted to change/modify/fork/clone/beta-test this into something that could do file transfers in a multi-threaded way? I did some searching around, and apparently rsync itself doesn't offer any multi-threading capability, but in conjunction with 'parallel' and/or a couple of other methods I saw, it's at least possible, if it can be figured out. <-- I can't figure it out, not a linux guy, too old to learn anything new. One site I found had some code that I got working a little bit; it seemed OK, but I'm not sure I was doing it correctly (see the sketch below). So without getting into the nitty-gritty details, what's the possibility or interest level of creating a multi-threaded tool? When you're moving TBs and TBs of data, single-threaded operations are very painful. Thanks!
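     Roughly the kind of thing I mean, as a minimal sketch: this assumes GNU parallel is installed, and the paths and job count are made-up examples, not anything from this plugin.

         # Run one rsync per top-level folder, four jobs at a time.
         # The destination directory must already exist; paths are examples.
         cd /mnt/disk1/Media
         ls -d */ | parallel -j4 rsync -a {} /mnt/disk2/Media/{}

     In theory each job still pays the parity-write penalty, so this would mainly help when the bottleneck is per-file overhead on lots of small files rather than raw disk throughput.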
  4. LOL! That's great. Don't you feel accomplished now that you've done it? Haha, thanks!
  5. You really don't give people a lot of credit, eh? You had the same reservations with the cleanup appdata plugin too, yet you worked around those hesitations. If someone selects a bunch of files in Linux/Windows and presses the big red X, they know what they are doing, so I personally don't think it would be very confusing. You have one button that says install and one button that says delete; I think people can figure that one out. But since I'm not smart enough to modify the plugin without giving myself a headache, I'll have to go with what you say
  6. Could I make a feature request? If possible, it would be really nice to have a mass remove function from the Previous Apps tab to allow for, you guessed it, removing multiple apps at once from the list. There's a mass install button for multiple apps but clearing out the list has to be done one by one. I've got 75 apps to go through and clean up and clicking the X button on each one and then going back to where I left off is excruciatingly painful. Thanks for the consideration.
  7. I just tried this as well, while setting up some new options and exclusions. With compression OFF, the command is:

         cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/appdata_backup/[email protected]/CA_backup.tar'

     With compression ON, the command is:

         cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/appdata_backup/[email protected]/CA_backup.tar.gz'

     So the only difference there is the file extension, but the resulting file size is different: uncompressed, 720MB; compressed, 200MB. (That's tar's -a flag at work: it picks the compression program from the archive name's extension, so the .tar.gz name is what turns on gzip.)

     I'm thinking what dubbly meant is that he wants no 'archive' file at all, but that's probably not the best idea. Before I excluded the Plex/radarr/sonarr image-related stuff, my backup archive had close to a million files. You do not want to back up that many files outside of an archive.

     There's one small thing to change, though. Here's the log for the USB backup; note the missing destination at the end of the line:

         Apr 28 22:40:22 UNRAID CA Backup/Restore: Backing up USB Flash drive config folder to

     In backup.php, change line 127 from:

         logger("Backing up USB Flash drive config folder to $usbDestination"); backupLog("Backing up USB Flash Drive");

     to:

         logger("Backing up USB Flash drive config folder to {$backupOptions['usbDestination']}"); backupLog("Backing up USB Flash Drive");

     I thought my USB backup was not working, but it looks like it is working properly; only the log line is wrong.
  8. This is a brand new, never used, APC UPS, in its original packaging. Shipping to the US only, USPS or UPS. Shipping weight is 14 lbs 7 oz. A link to full product info is here: https://www.apc.com/shop/us/en/products/APC-Back-UPS-650/P-BE650G1 I currently have this item on eBay but will give preference to my fellow Unraid'ers. I will have more 'stuff' to come as I replace my current stuff with my new stuff, but I'm starting with this for now.
  9. So I was going back and forth between a single UPS and multiple UPSs... I ended up ordering a single UPS, but I'm curious about the implementation of what you said. I ordered a CyberPower UPS and have the RMCARD205 network card in my cart, but I really don't want to pay $160 just for a network card. How could I have a Windows PC and my Unraid server both get the shutdown signal from the UPS simultaneously? I don't really want any batch scripts sending commands and stuff like that...
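     Edit: the closest thing I've found so far to "no batch scripts" is NUT (Network UPS Tools): the machine with the USB cable runs the NUT server, and every other machine runs a NUT client pointed at it. Purely a sketch of the idea, untested, with made-up names, addresses, and passwords:

         # ups.conf on the box with the USB connection:
         [cyberpower]
             driver = usbhid-ups
             port = auto

         # upsd.conf: listen on the LAN so other machines can connect (default port 3493):
         LISTEN 192.168.1.10 3493

         # upsmon.conf on each client (e.g. a Windows NUT client), with the
         # monuser account defined in upsd.users on the server:
         MONITOR cyberpower@192.168.1.10 1 monuser secretpass slave

     Whether that ends up being less hassle than the $160 network card is another question.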
  10. For my new setup that I'm planning, I'm contemplating playing around with VLANs, to separate IoT devices from my main network devices and to play around with network security a little bit. Not a major concern, just experimenting. I've looked at some smart/managed switches, but they seem either too complicated or more advanced than I really need/want. What's the best solution? I was thinking of running different sets of routers, but that might create its own set of problems and management headaches. Maybe a simple smart switch is what I want? I looked at a TP-Link 24-port for under $100, but a main complaint about it was that the management GUI could be accessed from any VLAN, so it's somewhat of a security risk, I guess.. though I'm not sure that risk would really concern too many people? I've survived without VLANs for many years but figured I'd just start playing with them, so what should I do here?
  11. Hey all, I have a grand idea of rebuilding my main PC and my Unraid server into a server chassis layout, and I don't fully know what I would need or want to accomplish this task. Why I want to do this is irrelevant, so don't ask 😛

      Obviously I know I need the server rack itself, and I'm considering some of the racks from sysracks.com that I feel I could live with (it will be an open, always-visible rack, so I want it to look at least somewhat attractive). My idea is to have (2) 4U chassis cases, one for each build. I have not looked at any new ones yet, since the Rosewill cases have been discontinued since I last thought about doing this. My main PC does not need a lot of expandability but will hopefully be future-proof with a full-size graphics card, etc., so I figured 4U would be sufficient. My Unraid server currently has 10 total drives, so hopefully I can find a 4U case to fit those, with possible expansion for the future; I really don't want to have to buy new, larger drives in order to fit.

      So now it comes down to........ what else do I need? I'm planning on (2) CyberPower 1U UPSs, one for each PC. That's my current setup, although in standard tower form. Each UPS supplies the required USB cable to each PC for shutdown control when needed. Is there a better way of accomplishing that task?

      I'll most likely have at least one PDU for other accessories (cable modem, router, etc.), connected to one of the UPSs, as I currently have. Maybe a PDU for each UPS, just for extra plugs and to split the load across the two UPSs.... any better way of planning this?

      I'm not sure if I'll need or want an ethernet patch panel. I do not have any cables coming in that would need to be dealt with; my entire network exists from the cable modem and router/switch. Perhaps one-off temporary ethernet connections would be easier to plug into a front-facing patch panel vs plugging into a hidden switch, but it's not a major requirement... thoughts on what I should do here? I guess a 1U switch that replaces my current switch would be sufficient, and no patch panel needed. And then a shelf or two for misc stuff like the cable modem and router.

      Am I missing anything else? Something I haven't thought of? Any thoughts / comments / suggestions will be greatly appreciated.
  12. For whatever it's worth, the SMART attributes said the drive failed .. "SMART overall-health:Failed" and when I tried to preclear the drive for removal it failed preclear/erase also... I've already opened up a warranty claim to RMA it just to be safe, but I won't use the replacement for anything critical.
  13. So I haven't had any more issues since removing this cache drive.... it seems it was the root of all problems! Lost my VMs since I couldn't move over the files but there was nothing essential there, and dockers reinstalled with no major issues. My cache drive was a 7-8 month old Mushkin SSD.... I guess it didn't work out too well. I'll eventually look to replace that. Thanks for the help.
  14. Ok will try that first. Thanks for the help. I'll try to move any files off the cache drive and remove it from the array and go from there.
  15. What about the read only shares though? The cache is trying to write to the shares, yet I can't find anywhere that they could be set to read only, or any reason why they would have been set to read only. Appdata is read only also, somehow. Those are my two biggest problems.
  16. I've been experiencing a number of problems within the last week or so that all seemingly started out of nowhere... last version upgrade, maybe? I've had the GUI/server essentially crash for some unknown reason, which was fine after a reboot, but I rebooted again last night to try and resolve some issues and ended up in a boot loop because the USB drive was not detected, or something. Got that resolved after a hard reset. I had a number of warnings about a drive or two with read errors, yet all drives pass all checks.

      Currently my biggest problem is that some shares are read only, even though read only was never set on any shares; again, this started out of nowhere. I ran Docker Safe New Perms to go through everything and reset any permissions, but I still have read-only shares. I have a number of "some or all files are unprotected" warnings on the Shares list because of these read-only issues. The Mover gets jammed up in the log:

          UNRAID move: move: create_parent: /mnt/disk8/appdata error: Read-only file system

      Fix Common Problems is currently giving me these two errors:

          Unable to write to cache: Drive mounted read-only or completely full.
          Unable to write to Docker Image: Docker Image either full or corrupted.

      What the hell is going on here? Last week when I was having read errors I put the array into maintenance mode and scanned all the drives; no errors were reported, the array restarted fine, and I didn't have any (known) problems until now.

      My system log has a bunch of bad-looking stuff in it.. is this all included in the diagnostics zip, if that would help to figure anything out?

          Aug 21 23:20:07 UNRAID dhcpcd[1795]: br0: failed to renew DHCP, rebinding
          Aug 21 23:30:45 UNRAID kernel: BTRFS error (device sdl1): parent transid verify failed on 857849856 wanted 13396518 found 13393366
          Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device sdl1) in btrfs_run_delayed_refs:2935: errno=-5 IO failure
          Aug 21 23:30:45 UNRAID kernel: BTRFS info (device sdl1): forced readonly
          Aug 21 23:30:45 UNRAID kernel: print_req_error: I/O error, dev loop2, sector 0
          Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 1, rd 0, flush 1, corrupt 0, gen 0
          Aug 21 23:30:45 UNRAID kernel: BTRFS warning (device loop2): chunk 13631488 missing 1 devices, max tolerance is 0 for writeable mount
          Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device loop2) in write_all_supers:3716: errno=-5 IO failure (errors while submitting device barriers.)
          Aug 21 23:30:45 UNRAID kernel: BTRFS info (device loop2): forced readonly
          Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device loop2) in btrfs_sync_log:3168: errno=-5 IO failure
          Aug 21 23:30:45 UNRAID kernel: loop: Write error at byte offset 17977344, length 4096.
          Aug 21 23:30:45 UNRAID kernel: print_req_error: I/O error, dev loop2, sector 35112
          Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 1, corrupt 0, gen 0
          Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): pending csums is 12288
          Aug 21 23:30:45 UNRAID kernel: BTRFS error (device sdl1): pending csums is 4096
          Aug 21 23:30:47 UNRAID kernel: BTRFS warning (device sdl1): csum failed root 5 ino 4631484 off 131072 csum 0x1079e3d3 expected csum 0x73901347 mirror 1
          Aug 21 23:30:47 UNRAID kernel: BTRFS warning (device sdl1): csum failed root 5 ino 4631484 off 262144 csum 0xafa74aad expected csum 0xfa3d3f16 mirror 1

      So, one thing at a time: how do I fix the read-only issues? Thanks.
  17. I was interested in playing around with any generation of one to see if I could make my own digital picture frame that performs better than my Nixplay digital frame does, but even if I managed to get it all together and mount an LCD in a not-ugly frame of some type, the software would be my issue..... Ideally, I would want something that would display photos from an Unraid share in a random order, or from Google Photos, maybe in some sort of configurable way, since there are a LOT of photos. Not sure what else I would use a Pi for.. that's just something that I've thought about for a while. I'll probably never do it.
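      If I ever do try it, the Unraid-share half might be the easy part. A rough sketch of the idea, untested, assuming a Pi with feh installed; the server address, share name, and mount point are made-up examples:

          # Mount the photo share read-only, then run a randomized fullscreen slideshow.
          sudo mount -t cifs //192.168.1.10/Photos /mnt/photos -o ro,guest
          feh --recursive --randomize --fullscreen --slideshow-delay 15 /mnt/photos

      The Google Photos side is the part I'd have no idea how to do.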
  18. I had the same problem and tried a bunch of things, none of which worked. Then I gave up. I was bored this morning after waking up at 2AM and decided to give it another try. Last time, the most success I had was using the Native PC image instead of the VMware image, but I still had an issue trying to boot it and gave up on that as well.

      You will need some Linux tools; I had a Debian VM installed already, so I used that.

      Download the DietPi Native PC (BIOS/CSM) image. If you try to create an Unraid VM with that image, you will eventually get into a loop of not being able to download updates because of no available free space (that's the hint). This time, I did some more research and tried a few more things.. and seem to have it working. I used the info provided here to resize the image: https://fatmin.com/2016/12/20/how-to-resize-a-qcow2-image-and-filesystem-with-virt-resize/

      Use a couple of tools to show you info about the image and then to resize it:

          qemu-img info DietPi_NativePC_BIOS-x86_64-Stretch.img

      This displays the disk size as 602M.. not enough to be usable. I don't know what the minimum size should be; I added entirely too much at 30 gigs, but I'm just testing at this point so I don't care.

          qemu-img resize DietPi_NativePC_BIOS-x86_64-Stretch.img +30G
          cp DietPi_NativePC_BIOS-x86_64-Stretch.img DietPi_NativePC_BIOS-x86_64-Stretch-orig.img
          sudo virt-resize -expand /dev/sda1 DietPi_NativePC_BIOS-x86_64-Stretch-orig.img DietPi_NativePC_BIOS-x86_64-Stretch.img

      I had to use sudo since I wasn't logged in as root.

      Now you can take that new, resized DietPi img, use it as your Unraid VM hard drive, and install DietPi. To save you some time and effort, here's a fresh 4GB image that can be used for Unraid: DietPi_NativePC-BIOS-x86_64-Stretch-4GB-UNRAID.7z - 121.7 MB
  19. I installed this docker for a couple of minutes last night and just didn't seem to get it working correctly, but I jumped in fast and didn't do much research, so that's on me. Having things run as a docker might be easier and more convenient, but what I ended up doing was installing a Debian VM and installing pi-hole that way, and it seems to be working pretty well so far. I wonder, though, whether anywhere in the 25 pages of this thread anyone has asked about DNS fallback if Unraid/docker/pihole is down? Is it just a matter of setting a secondary DNS server on your router in case it can't communicate with pi-hole? Now on to looking for something else to do with the server......
  20. This tool has been pretty useful to me, so I'll continue to use it. Thanks for your efforts with keeping it updated and always useful. Question: would it be possible to modify it so that, even after doing all those steps, it detects/determines whether the action is going to delete the entire array (or just do something very bad), displays a severe warning, and requires the action to be explicitly allowed? Might it be as easy as checking the path selected for deleting and making sure the path is a folder and not the whole share?
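      Something like this is what I'm picturing, as a rough sketch; it assumes Unraid's /mnt/user/<share>/... layout, and the path and messages are made-up examples, not the tool's actual code:

          # Warn when the delete target is an entire share rather than a folder inside one.
          target="/mnt/user/Media"            # hypothetical path selected for deletion
          rel="${target#/mnt/user/}"          # strip the user-share prefix
          if [ "$rel" = "$target" ] || [[ "$rel" != */* ]]; then
              echo "WARNING: '$target' is a whole share (or outside /mnt/user)."
              echo "Re-type the share name to confirm this delete."
              # ...require explicit confirmation before proceeding
          fi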
  21. Ah, I see, so if I set them both to /mnt/user or /mnt/user/Media, as previously mentioned, that would be considered the same mount point? Doing that quickly makes my movie file paths /movies/Movies/<name>, so when I change the existing movie file path it doesn't register the existing file. All in all, it seems to break everything. If sonarr works fine, why does radarr work so poorly? The radarr devs shouldn't make such drastic changes to simple things. Not sure where to go from here; maybe I'll start over completely.

      Edit: this completely makes no sense to me. Maybe that's my problem, but why is this so complicated? If I change radarr's /movies path to /mnt/user/Media/Movies, the movie path is correct in radarr... but even though the media file is there, it's not detected; it says missing. Also, with this mapping (and no /downloads) I can't import from any other folder. If I map a /downloads and a /movies, then it copies files instead of moving them quickly. What is the proper way to configure this? No way seems to be the correct way.
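      For reference, the single-mapping layout I keep seeing recommended looks roughly like this (a sketch with made-up host paths, not my verified working settings): both the download folder and the movie library sit under one container-side path, so radarr sees them on the same filesystem and can move/hardlink instead of copying.

          # One volume covers both downloads and the library (paths are examples):
          docker run -d --name=radarr \
            -p 7878:7878 \
            -v /mnt/user/appdata/radarr:/config \
            -v /mnt/user/Media:/media \
            linuxserver/radarr
          # Inside radarr: root folder /media/Movies, download client at /media/Downloads.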
  22. I just started playing with radarr and have everything working OK enough... except for imports. My radarr is also copying files rather than moving them, and I can't figure out why. Everything is mapped on the same share, so it "shouldn't" be an issue of mount points. And the mounts are essentially the same as they are with sonarr, which works perfectly. I've tried toggling the use-hardlinks setting in radarr with no difference. What am I missing here? Also, any ideas on why radarr is not detecting some movie files and keeps them listed as missing? The files are there, but they are not scanned.
  23. Excellent, thanks for the confirmation. My next task will be to create a cache pool. Just for fun. What I didn't mention was that I actually put 2 SSD's in but one of them was from a PS4 and apparently I can't get it recognized by the system to format it... either that or I forgot to plug something in. Problem for another day when I want to tinker with it.
  24. Added a 500GB SSD cache drive after roughly a year or so of having my server running.. I didn't need to add the cache drive; it was just something to do to scratch my build-something itch, and a simple SATA card and drive addition cured me for a little bit.

      Using the various threads and guides that I found, I'm pretty sure I have everything set the way it should be; for example, appdata, domains, and system shares are set to Prefer the cache disk, and other shares are set to Yes for using the cache disk. All seems well; I haven't noticed anything going wrong.

      But now what? Am I supposed to make any other changes to existing dockers/VMs? How about going forward? I didn't specifically see any 'how-to usage' of the cache drive mentioned anywhere, but I might have missed it. In essence, what I am asking is: for any existing or new dockers and/or VMs, do I use the cache disk as the location for anything, or do I continue to use the normal filesystem mappings while the usage of the cache drive is handled by Unraid on its own? Do I point a (new or existing) VM's hard drive to /mnt/cache/domains/SomeVM and then it's mirrored to the array, or do I still map it to /mnt/user/domains/SomeVM and the cache drive is used automatically, kind of like a symbolic link? Similarly, if I'm sending data to a share that has use cache disk = yes, for example a downloads folder, do I write to /mnt/cache/downloads or to /mnt/user/downloads?

      I'm sure this is covered somewhere, but I couldn't find anything on this topic. Point me in the right direction. Thanks for indulging my noob question.