Posts posted by mkono87

  1. 5 hours ago, trurl said:

    Replace disk1 but keep original disk1 with its contents in case of problems.

     

    If I were to just go ahead and get two bigger drives, one for parity and one for disk 1, what's the best method of attack? Replace disk 1, add the other drive as a second parity, then remove the old parity once the rebuild has completed?

  2. 3 minutes ago, JorgeB said:

    Both are logged as disk problems. I would try to replace disk1 first since that one looks worse; then, if there are any more read errors on parity, replace it as well.

    Unfortunately, I think this is a result of an unclean shutdown. If I replace disk 1 first, could I expect some lost files at this point?

  3. 54 minutes ago, sdub said:

    Looks like the root issue is in that '/root/.cache/borg' folder. If you open a shell into the borg container and browse to that folder, does everything appear normal? Could be that the folder mapping is screwed up in your docker config.


     

    It is mapped to a subfolder of the repo folder. Is that fine? The cache folder on the host wasn't there, so I created it, but now when I try to init it says there is something already there.
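
    For reference, here is roughly how I understand the two mappings are supposed to look; the host-side paths below are just placeholders, not what my template actually uses:

    # hypothetical host paths -- adjust to your own shares
    # repo:  /mnt/user/backups/borg-repo   -> /mnt/borg-repository (container)
    # cache: /mnt/user/appdata/borg/cache  -> /root/.cache/borg    (container)
    docker run -d --name borg \
      -v /mnt/user/backups/borg-repo:/mnt/borg-repository \
      -v /mnt/user/appdata/borg/cache:/root/.cache/borg \
      <image>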

  4. Trying to initialize the local repo, I receive a bunch of traceback errors, yet I still see files created in the folder.

    9b90dcb83a73:/# borg init --encryption=none /mnt/borg-repository/
    Local Exception
    Traceback (most recent call last):
      File "/usr/local/lib/python3.11/site-packages/borg/archiver.py", line 5324, in main
        exit_code = archiver.run(args)
                    ^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/archiver.py", line 5255, in run
        return set_ec(func(args))
                      ^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/archiver.py", line 183, in wrapper
        return method(self, args, repository=repository, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/archiver.py", line 290, in do_init
        with Cache(repository, key, manifest, warn_if_unencrypted=False):
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/cache.py", line 387, in __new__
        return local()
               ^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/cache.py", line 378, in local
        return LocalCache(repository=repository, key=key, manifest=manifest, path=path, sync=sync,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/cache.py", line 478, in __init__
        self.path = cache_dir(repository, path)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/cache.py", line 229, in cache_dir
        return path or os.path.join(get_cache_dir(), repository.id_str)
                                    ^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/helpers/fs.py", line 100, in get_cache_dir
        with SaveFile(cache_tag_fn, binary=True) as fd:
      File "/usr/local/lib/python3.11/site-packages/borg/platform/base.py", line 230, in __enter__
        self.tmp_fd, self.tmp_fname = mkstemp_mode(prefix=self.tmp_prefix, suffix='.tmp', dir=self.dir, mode=0o666)
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/helpers/fs.py", line 408, in mkstemp_mode
        return _mkstemp_inner(dir, prefix, suffix, flags, output_type, mode)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/borg/helpers/fs.py", line 364, in _mkstemp_inner
        fd = _os.open(file, flags, mode)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/borg/CACHEDIR.TAG-uryrraek.tmp'
    
    Platform: Linux 9b90dcb83a73 6.1.49-Unraid #1 SMP PREEMPT_DYNAMIC Wed Aug 30 09:42:35 PDT 2023 x86_64
    Linux: Unknown Linux
    Borg: 1.2.6  Python: CPython 3.11.5 msgpack: 1.0.5 fuse: llfuse 1.5.0 [pyfuse3,llfuse]
    PID: 65  CWD: /
    sys.argv: ['/usr/local/bin/borg', 'init', '--encryption=none', '/mnt/borg-repository/']
    SSH_ORIGINAL_COMMAND: None
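
    The traceback ends with a missing /root/.cache/borg directory inside the container, so one thing I might try before re-running the init (just a guess on my part; the alternate cache path is an assumption) is:

    # inside the container shell: create the missing cache dir, then retry
    mkdir -p /root/.cache/borg
    borg init --encryption=none /mnt/borg-repository/

    # or point borg at a cache location that is definitely mapped
    export BORG_CACHE_DIR=/config/borg-cache   # hypothetical path
    borg init --encryption=none /mnt/borg-repository/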

     

  5. 11 hours ago, PeteL said:

     

    I tested the URL and it works correctly. Try to schedule a new recording using a different show name & time.

    Do a short test recording for 1-2 minutes using the continuous method.

    If you can, please post the container log, including info on when the job is registered, the arguments, and the post-processing.

    Also, please check the recording log. You will find it on the Jobs page. Each individual job has its own log. Click on the note icon to the left of the clock icon.

    It should have a start date & time, an end date & time, and the ffmpeg exit code.

     

    There could be different reasons for this issue.

    What options are you trying to use for the scheduled job?

    Please ensure that both the config & media folders are set correctly in the container template.

    Also, check the appdata/rradio/temp folder and see whether there are any incomplete recorded files.

    It is possible that ffmpeg stops recording because of incorrect folder permissions.

     

    It seems to be working now. I removed the schedules instead of editing them, and it looked like it worked after doing a one-minute test.
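
    If it acts up again, the first thing I'll check is the temp folder permissions from the Unraid shell; a rough sketch of what I'd run (the share path is my assumption):

    # assumed appdata location for this container
    ls -la /mnt/user/appdata/rradio/temp
    # reset ownership/permissions the usual Unraid way if they look wrong
    chown -R nobody:users /mnt/user/appdata/rradio/temp
    chmod -R u+rwX,g+rwX /mnt/user/appdata/rradio/temp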

  6. 10 hours ago, PeteL said:

    Could you please share the URL you are trying to record from?

    I need a little more detail on what happens when you try to record.

     

    When you schedule a repeating recording, by default each job can be triggered only once during a day. This is to prevent creating duplicates.

    In this particular example, job 0-4 was already triggered and executed, hence any subsequent attempt is aborted. I do not recommend triggering jobs directly from the dashboard unless you want to test whether recording will work. In that case, create a new scheduled job, go directly to the dashboard, and trigger the job. Then check the container log to see whether the job was registered, the ffmpeg command-line arguments, and the post-processing.

     

    To get to the dashboard:

    http://IPADDRESS:PORT/dashboard

    Recordings are performed in the temp folder:

    /appdata/rradio/temp

    Please check this folder. It is possible that ffmpeg stopped recording at some point.

     

     

    I have seen the ffmpeg command-line arguments before, but nothing seems to record regardless. I'll remove the scheduled recordings and try again. This is the only radio station I have tried: https://bayshore-ais.leanstream.co/CISOFM-MP3?args=tunein_01
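
    Before setting the schedules up again I'll probably try a quick manual grab of that stream from a shell, just to rule out the stream itself; a rough sketch (the duration and output path are my own choices, not what the container uses):

    # record ~2 minutes of the stream, copying the audio as-is
    ffmpeg -i "https://bayshore-ais.leanstream.co/CISOFM-MP3?args=tunein_01" \
           -t 120 -c copy /tmp/cisofm-test.mp3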

     

     

  7. 2023-07-01 08:51:13 [INFO]  (Executing job 0-4)
    2023-07-01 08:51:13 [INFO]  (Job registered: True.)
    2023-07-01 08:51:13 [INFO]  (Aborting)

     

    I'm having a heck of a time trying to record shows. I can't seem to find any more errors. The radio station plays just fine in this app.
     

  8. This is what I have come up with now. 

    i5-13500 14 Core

    Gigabyte Z690 Aorus Elite AX DDR4 LGA1700

    Kingston KC3000 1TB NVMe M.2

    TeamGroup T-Force Vulcan 16GB(2x8) X2

    Deepcool AK400 CPU cooler

    Fractal Design Meshify 2 Case

     

    I originally thought of a Rosewill 4U case, but I know I want to update my desktop PC in the near future as well. I have considered taking a crack at virtualizing it and passing through my GPU.

     

    I also noticed my local shop has these Kingston NV2 1TB Gen 4x4 NVMe M.2 drives on sale, and I'm curious whether I should pick up two for a pool for something.

  9. 19 hours ago, Lolight said:

    Not quite sure of what kind of performance you have in mind.

     

    Generally speaking, Intel is considered to be a more stable and efficient platform (C-states implementation).

    Intel's iGPUs, included on the chip, offer great transcoding performance while consuming very little electricity.

    I cannot see a compelling reason for AMD hardware in a Plex setup.

    Any reason to use an i7 over an i5 besides more cores? I was looking at an i5-13500.

  10. 7 hours ago, Lolight said:

    What are your reasons for needing to switch to AMD and a stand-alone GPU, instead of upgrading to a more current Intel CPU with an iGPU (Quicksync)?

    Honestly, no real reason. I figured it's a much better price/performance ratio. Is there anything you would suggest?

  11. Finally, time to upgrade my little beast. I'm currently running an i3-6100 with a mini-ITX ASRock server board, as it was originally a FreeNAS system. I want to upgrade to an AM4 socket, but I don't think I want to shell out the money for a server board; not sure it's really necessary.

     

    Here is what I am thinking and would love some advice.

     

    *Edited to Intel system*

     

    i5-13500 14 Core

    Gigabyte Z690 Aorus Elite AX DDR4 LGA1700

    Kingston KC3000 1TB NVMe M.2

    TeamGroup T-Force Vulcan 16GB(2x8) X2

    Deepcool AK400 CPU cooler

    Fractal Design Meshify 2 Case

     

     

    I was using Quicksync for Plex transcoding. Not sure if I need to buy a cheap GPU for this build. I may be upgrading my desktop PC too, which has a GTX 1070 in it, so that might be an option. Any ideas on a cheap GPU for transcoding would be nice.

     

    I'm using the current power supply, 4x Red drives, and the cache SSD. The Samsung will become the cache.