Anyone ever used git-for-large-objects?

Discussion in 'Tech Heads' started by Agrul, Jun 4, 2019.

  1. Agrul

    Agrul TZT Neckbeard Lord

    Post Count:
    45,756
    https://git-lfs.github.com/

    If so, any tips?

    I've been maintaining anything I give a shit about with a Java implementation of git, using an AWS S3 bucket as a headless remote repo store. Some of that stuff I give a shit about (arrowdick) consists of pretty big files. So far it's still peanuts ($0.80/month), but it could eventually get quite pricey, so I'm looking into ways I can keep using git without paying Amazon to store a bazillion TBs of data a few years from now. (Git's various text-based tools don't do shit for 3D art, but I love the ability to easily roll back to almost arbitrary versions, and the consolidated workflow it provides for working locally and then switching to the laptop while wandering around.)
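
    For reference, the setup is roughly this shape in JGit (assuming that's the Java implementation in question; JGit ships a built-in amazon-s3:// transport, and the repo path and bucket name here are made up):

        import org.eclipse.jgit.api.Git;
        import java.io.File;

        public class PushToS3 {
            public static void main(String[] args) throws Exception {
                // Open the local working repo (path is a placeholder).
                try (Git git = Git.open(new File("/home/agrul/projects/arrowdick"))) {
                    // JGit's amazon-s3:// transport reads credentials from a
                    // ".jgit" properties file (accesskey / secretkey lines)
                    // in your home directory; "my-art-bucket" is made up.
                    git.push()
                       .setRemote("amazon-s3://.jgit@my-art-bucket/arrowdick.git")
                       .call();
                }
            }
        }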

    Really not even sure what git-lfs does. Are better compression options available just based on file size? Wouldn't think so. Maybe I'm wrong. Halp.
     
  2. Utumno

    Utumno Administrator Staff Member

    Post Count:
    41,052
    No idea, but I'm interested in anything u learn if u try it or solve ur dilemma in other ways.
     
  3. Agrul

    Agrul TZT Neckbeard Lord

    Post Count:
    45,756
    from what i could tell, git-lfs's primary hack is to just let you flag certain objects as not really version-controlled in the true git sense (to wit: "Git LFS handles large files by storing references to the file in the repository, but not the actual file itself."), which is not what i wanted
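
    concretely, what git ends up versioning for an lfs-tracked file is just a tiny pointer file, per the spec at the link above. something like this (hash and size made up):

        version https://git-lfs.github.com/spec/v1
        oid sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4
        size 52428800

    the actual blob lives wherever your lfs server puts it, not in the repo history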

    git's default compression is pretty good, though, and s3 is cheap. after 4 years of fucking around maintaining unity projects, songs, blender 3d projects, etc. all in a few s3 buckets, i'm still only paying $1/month. and my glacier utilization could be a lot higher than it is, i think
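
    if s3 spend ever does become a problem, a lifecycle rule is the low-effort way to shove old objects into glacier automatically. a minimal sketch with the aws sdk for java v2 (bucket name and prefix are placeholders; caveat: if the remote ever repacks, object keys change, so this fits best for stuff that mostly sits still):

        import software.amazon.awssdk.services.s3.S3Client;
        import software.amazon.awssdk.services.s3.model.*;

        public class GlacierLifecycle {
            public static void main(String[] args) {
                try (S3Client s3 = S3Client.create()) {
                    // Transition everything under repos/ to Glacier after 90 days.
                    LifecycleRule rule = LifecycleRule.builder()
                        .id("archive-old-repo-objects")
                        .status(ExpirationStatus.ENABLED)
                        .filter(LifecycleRuleFilter.builder().prefix("repos/").build())
                        .transitions(Transition.builder()
                            .days(90)
                            .storageClass(TransitionStorageClass.GLACIER)
                            .build())
                        .build();

                    s3.putBucketLifecycleConfiguration(
                        PutBucketLifecycleConfigurationRequest.builder()
                            .bucket("my-art-bucket") // placeholder
                            .lifecycleConfiguration(BucketLifecycleConfiguration.builder()
                                .rules(rule)
                                .build())
                            .build());
                }
            }
        }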
     
  4. Agrul

    Agrul TZT Neckbeard Lord

    Post Count:
    45,756
    which is to say this didn't lead to anything better for my use case. i'm just gonna stick with my current workflow and more aggressively move shit to glacier, or delete more crap, if s3 costs become an issue
     
  5. Agrul

    Agrul TZT Neckbeard Lord

    Post Count:
    45,756
    although revisiting this now, maybe a way forward would be:

    1. get accustomed to using one of those mount-an-s3-bucket-so-it-looks-like-local-storage APIs/libraries
    2. tell git LFS to do its thing, i.e. store only references to where the large object lives
    3. profit? assuming you trust s3 to back up the large files sufficiently, of course

    without #1 i presume this doesn't work very well, though, as it seems like you're basically treating large files as non-git objects and it's your job to make sure they are locally available in the location where they are expected
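
    if i ever try it, the "make sure they're locally available" part could be scripted. purely a hypothetical sketch (the bucket/key layout is invented; the .git/lfs/objects/<aa>/<bb>/<oid> path is where git-lfs keeps its local objects):

        import software.amazon.awssdk.services.s3.S3Client;
        import software.amazon.awssdk.services.s3.model.GetObjectRequest;

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FetchLfsObject {
            public static void main(String[] args) throws Exception {
                // Parse the oid out of an LFS pointer file passed as arg 0.
                String pointer = Files.readString(Path.of(args[0]));
                Matcher m = Pattern.compile("oid sha256:([0-9a-f]{64})").matcher(pointer);
                if (!m.find()) throw new IllegalArgumentException("not an LFS pointer file");
                String oid = m.group(1);

                // git-lfs expects local objects at .git/lfs/objects/<aa>/<bb>/<oid>.
                Path dest = Path.of(".git/lfs/objects",
                        oid.substring(0, 2), oid.substring(2, 4), oid);
                Files.createDirectories(dest.getParent());

                // Pull the blob down from a made-up bucket/key layout.
                try (S3Client s3 = S3Client.create()) {
                    s3.getObject(GetObjectRequest.builder()
                            .bucket("my-art-bucket")   // placeholder
                            .key("lfs-objects/" + oid) // placeholder
                            .build(),
                        dest);
                }
            }
        }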