Veeam to AWS Glacier?

Discussion in 'Tech Heads' started by cruoris, Jun 16, 2017.

  1. cruoris

    cruoris TZT Regular

    Post Count:
    619
    I am in the process of implementing Veeam. Currently I back up to disk, then to LTO4 tapes. I am considering using AWS Storage Gateway to store my Veeam backups on a VTL backed by Glacier. Is anyone here currently doing this? I am still uncertain whether my Veeam B&R licensing will support this or if I need Cloud Connect.
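    Before committing, it's worth sketching the storage cost. A rough back-of-the-envelope, assuming LTO4-sized virtual tapes (~800 GB native) and Glacier at roughly $0.004/GB-month (2017-era pricing; the rate and any retrieval/early-deletion fees should be checked against current AWS pricing):

```python
# Rough monthly-cost sketch for storing virtual tapes in Glacier.
# Assumptions (not from the thread): LTO4-sized virtual tapes at 800 GB
# native capacity, and Glacier at ~$0.004 per GB-month. Verify current
# AWS pricing before relying on these numbers.

GLACIER_PER_GB_MONTH = 0.004   # USD, assumed
TAPE_SIZE_GB = 800             # LTO4 native capacity

def monthly_cost(num_tapes, fill_ratio=1.0):
    """Estimated Glacier storage cost per month for a set of virtual tapes."""
    stored_gb = num_tapes * TAPE_SIZE_GB * fill_ratio
    return stored_gb * GLACIER_PER_GB_MONTH

# e.g. a 30-tape rotation, tapes ~75% full on average:
print(f"${monthly_cost(30, 0.75):.2f}/month")   # -> $72.00/month
```

    Note this covers storage only; Glacier retrieval costs (and VTL archive/retrieve delays) are the usual gotcha for restore scenarios.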
     
  2. Utumno

    Utumno Administrator Staff Member

    Post Count:
    42,820
    I've no first-hand experience with this, but if the price looks good it seems like a worthwhile move. I really do not miss dealing w/tape backups.
     
  3. Solayce

    Solayce Would you like some making **** BERSERKER!!! Staff Member

    Post Count:
    21,660
    not here either. Considering looking into Hyper-V, plus all-2016 servers for my enterprise apps, to leverage Azure Backup. It would be good to know your experience here.
     
  4. Truffin Skoaldiak

    Truffin Skoaldiak Location: Your Mom's House, sniffing her panties!

    Post Count:
    891
    Thank goodness, i remembered my password to log into here......
    We use Veeam almost exclusively at the moment, in a service provider role. We are vetting various backup products and pitting them against one another, not only for ease of use, functionality, etc., but also to help drive down some pricing on them. One of those things is direct-to-cloud - in our RFI it was stated it could do it; we just haven't tested or proven it can, or found what the gotchas are, if any.

    Currently, the way we get backups offsite is: we do a local backup copy, with retention usually between 3-7 days. This is for quick recovery of data, or entire machines. We also have a backup copy job that runs continuously - when a new backup file is detected, it takes the backup and copies it to a repository in one of our 6 data centers, where we have various retention periods depending on what is needed/sold. We also serve as a Cloud Connect repository, so other companies can use us, independently, to send their backups to "the cloud"
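    The flow above - short local retention for fast restores, plus a continuous copy job mirroring new restore points offsite with longer retention - can be sketched in a few lines. This is purely illustrative; the names and retention numbers are made up, and this is not the Veeam API:

```python
# Illustrative sketch of the copy-job flow described above: keep a short
# local retention window, and mirror every new restore point to a remote
# repository with its own (longer) retention.

from datetime import date, timedelta

def prune(points, retention_days, today):
    """Return only the restore points still inside the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [p for p in points if p >= cutoff]

local, remote = [], []

def on_new_restore_point(point, today, local_days=7, remote_days=90):
    local.append(point)
    remote.append(point)        # the copy job mirrors it offsite
    local[:] = prune(local, local_days, today)
    remote[:] = prune(remote, remote_days, today)

today = date(2017, 6, 16)
for age_days in (30, 10, 3, 1, 0):
    on_new_restore_point(today - timedelta(days=age_days), today)

print(len(local), len(remote))  # -> 3 5 : local keeps 7 days, remote keeps 90
```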

    For longer retention periods, usually situated around the legal and medical industries, we are vetting the direct-to-cloud capability that is supposedly a function of Veeam. Veeam 10 is set for the 4th quarter of this year and brings a lot of new features and functions; we have not gotten word back yet on whether that version supports direct-to-cloud, or if the functionality is built into the current 9.5 version and we are just unaware.

    Hopefully, in the next month or two, i can answer your question directly, as to whether it supports direct-to-cloud and actually works. You could also investigate using backup copy jobs and sending those to a Cloud Connect provider, to get your backups offsite.
     
  5. Solayce

    Solayce Would you like some making **** BERSERKER!!! Staff Member

    Post Count:
    21,660
    We're using NetApp to backend our virtuals and Isilon to host our file shares (SMB and NFS) and our scratch space for HPC Grid. We have a DataDomain, to do compression, and a tape library to get stuff offsite. We're currently using NetBackup but it's costing us a fortune. Those guys started building some ZFS servers to maybe leverage some disk to disk stuff, but then we just got a very sweet deal on renewing Isilon nodes, collapsing 20 -> ~10, and a new DataDomain. Luckily I'm not directly responsible for that.

    They're looking to get off NetBackup. I can handle our Exchange and SQL Servers with SystemCenter, but would potentially need a product to do VM backups (both VMware and Hyper-V). I assume you're using Veeam for all of that?
     
  6. Utumno

    Utumno Administrator Staff Member

    Post Count:
    42,820
    datadomain, netapp, isilon - man you guys just chucking money around like there's no tomorrow
     
  7. Utumno

    Utumno Administrator Staff Member

    Post Count:
    42,820
    that's not criticism by the way, just how i see a lot of these companies pricing themselves out of existence in the near future
     
  8. Solayce

    Solayce Would you like some making **** BERSERKER!!! Staff Member

    Post Count:
    21,660
    to be fair, the prices are RE-DIC-U-LOUS. We're getting those extremely cheap though - like 50% off. That said, now that I report directly to the CTO - hoping to move to Director of IT in the next couple years - I'm going to completely move us to the home-made hyperconverged infrastructure we need for performance, but at a sustainable price. The Unix guys will get us to an all disk-to-disk ZFS backup system, too. Replacing a node, or two, of commodity servers every year is waaaay less of a capital outlay than a 250-500K storage appliance every 5. The software-defined world is really going to help us out in the next 5 years if we start moving now.
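    The capex argument works out even under conservative assumptions. Here's the arithmetic, with an assumed (not from the thread) commodity node price of ~$15K and two nodes replaced per year, against the quoted 250-500K appliance on a 5-year cycle:

```python
# Back-of-the-envelope version of the capex argument above.
# NODE_COST and NODES_PER_YEAR are illustrative assumptions.

NODE_COST = 15_000        # assumed commodity server price, USD
NODES_PER_YEAR = 2

def commodity_5yr_capex():
    """Five years of rolling node replacements."""
    return NODE_COST * NODES_PER_YEAR * 5

def appliance_5yr_capex(price):
    """One storage appliance purchased per 5-year cycle."""
    return price

print(commodity_5yr_capex())           # -> 150000
print(appliance_5yr_capex(250_000))    # -> 250000, low end of the quoted range
print(appliance_5yr_capex(500_000))    # -> 500000, high end
```

    Even at the low end of the appliance range, the rolling-replacement model comes in well under it, and spreads the spend out instead of concentrating it in one refresh year.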
     
  9. Utumno

    Utumno Administrator Staff Member

    Post Count:
    42,820
    home made? you're not jumping on the openstack bandwagon are you?

    if i had to do a fresh setup right now, i'd probably take a look at nutanix honestly
     
  10. Solayce

    Solayce Would you like some making **** BERSERKER!!! Staff Member

    Post Count:
    21,660
    I've definitely received cold calls/emails recently about, or from, Nutanix. Problem is, they're fucking expensive. No. I'm looking to do it all with Windows. You can layer Hyper-V on top of Storage Spaces Direct to create your own HCI stack, where you can scale out or up as needed. If it works out, maybe we would tackle OpenStack in the future, but we might just skip something like that and go to containers for our Linux needs (non-Grid).
     
  11. Truffin Skoaldiak

    Truffin Skoaldiak Location: Your Mom's House, sniffing her panties!

    Post Count:
    891
    We have stayed away from NetApp and Isilon; however, we have roughly 5 physical (maxed) Data Domains that are "ours", several virtual, and a good 8-10 that are HaaS. And yeah, they are expensive, but we do have an Avamar grid (being phased out, and again, expensive) that justifies the Data Domains. Avamar and Data Domain are a match made in heaven.

    Earlier i mentioned the backup RFI stuff, so things are in transition between vendors/technologies, as well as a couple of acquisitions to get fully roped in. So right now, SQL is a mix between Avamar and Veeam - all going to Veeam. Exchange is currently AppAssure for our (old) hosted Exchange, just because Veeam is unable to keep up with it, and the "stun" from the snapshot tasks can cause some end-user headaches if noticed. Now in Exchange 2013, with it being active-active, Veeam can easily knock it out, and the stun isn't an issue if you have your jobs set up properly. So with our old hosted Exchange environment going away, so will AppAssure....thankfully.

    And you mentioned hyperconverged - good on you for realizing its potential. Another check mark for "homemade" - whatever you do, do not go VxRail. That is the worst thing I have ever had the pleasure to work with. It was sold with no knowledge whatsoever of it (at the time), and it was a PITA to get set up properly. To top it off, it wasn't just a single VxRail in a common configuration. This particular one was a stretched Layer 2 to one of our other datacenters, with a stretched vSAN. This particular client is a very prominent business, reliant on extreme high availability, with the most custom SLA to match....we start getting charged thousands an hour after 20 mins of downtime.....but i digress.

    Homemade is the VxRail without the "rails", so to speak, and is a true blessing. We have gone through various pricing models, partner discounts, etc. It is either Supermicro servers built to our liking, or Dell FX2 vSAN-ready nodes - which, as of this typing, I am setting up the POC for, for the 3rd time. The setup itself is simple; the issue is the hardware. So far, though, the FX2 model has been the best experience, and with automation and rapid deployment of them set up, it has been a breeze to do manually as well as from an automation standpoint. If you have the capability now to start some POC work for various models and setups, I would suggest starting now, as this space is evolving fast. No cost to your business, and if you can get the eyes and ears of key folks, you might be able to make a more solid argument based on facts learned, so when you are ready to move to a hyperconverged setup, most of the upfront work is done and you are ready to start hitting production environments.
     
    Solayce likes this.
  12. Solayce

    Solayce Would you like some making **** BERSERKER!!! Staff Member

    Post Count:
    21,660
    Thanks Truf. Given the goal of a software-defined datacenter, all commodity servers in simple, replaceable units, I was hoping to do it with more "off-the-shelf" style servers, like a Dell R730xd/R740xd. What am I missing here? I didn't think to check for RDMA storage controllers...is that why you have to use the FX2s? We're primarily thinking of Supermicro, so maybe we can talk to them about the build.
     
  13. Truffin Skoaldiak

    Truffin Skoaldiak Location: Your Mom's House, sniffing her panties!

    Post Count:
    891
    You aren't missing anything that I am acutely aware of. Right now we are running the FX2 in a lab POC, and will probably go with the next-gen build. With the VMware and Dell partnership (and ours with EMC, now Dell), you will probably start to see these things roll out pretty much configured, in an almost plug-and-play type setup. At least, that is the intended goal anyway. There is lots of helpful stuff out there to ensure everything conforms to the HCL.

    https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan - lots of helpful links, top and bottom, of the compatibility guide. vSAN sizing, ready node configurator (link below), etc.

    http://vsanreadynode.vmware.com/RN/RN - ready node configurator. This is what I was referring to above. It really comes down to the hardware, and whom you want to go with. This basically builds out the BOM for what you are wanting, and bridges the gap with whatever hardware vendor you choose.

    https://labs.vmware.com/flings/vsan-hardware-compatibility-list-checker - if you have never heard of "flings", you should check them out. They are basically side projects from developers within VMware, working on their own or in small teams. Some pretty cool stuff starts out as a fling (the HTML5 web client, for example), and may some day see production.

    With the way the market is going, and the ease of automation, vendors will have little room to compete against one another on vSAN-ready nodes. But right now, i think you are on the right track checking out Supermicro. They are leaps and bounds cheaper than the rest from what we have found, and are still a competitor in our bake-offs. Since we are a service provider, the licensing is nothing since it is already being paid for, and the technology and in-house expertise to build and automate it are there, so it all comes down to the hardware - and that will be the case for most any company now.
     
    Solayce likes this.
  14. Solayce

    Solayce Would you like some making **** BERSERKER!!! Staff Member

    Post Count:
    21,660
    Niiiiice. Thanks!
     
  15. Solayce

    Solayce Would you like some making **** BERSERKER!!! Staff Member

    Post Count:
    21,660
    @Utumno My CTO actually got convinced to go rogue and schedule a meeting with Nutanix without my knowledge. Glad he did. I wanted to get a couple more of our guys into a new meeting before hearing prices, but they're way beyond what I thought of them, based on 2+ year old knowledge. If it's been a while for anyone here: they're looking to move into the Software Defined Data Center space as the "Datacenter OS," and become a (to a degree) hardware-agnostic software platform.

    So, while roll-your-own sounds fun, if they come in at an acceptable price, less than VMware's, it may not be a bad choice for our long game. We'll see. If they're as expensive as I've heard them to be, we did get some ideas that we could apply without them.
     
  16. Utumno

    Utumno Administrator Staff Member

    Post Count:
    42,820
    Yeah, honestly with the markets shifting as fast as they are (and Amazon kicking the living shit out of everything), these companies and their pricing may shift from quarter to quarter. It's probably worth keeping track of them if someone has time to do so (god knows I don't anymore - I used to enjoy keeping tabs on virtualization and storage companies and their offerings).
     
  17. vanayr

    vanayr NPCF

    Post Count:
    21
    Just speaking as someone who has a $600,000+ AWS bill, believe me, it's not all it's cracked up to be. If your workloads are not designed for the cloud, it's not going to help. At all... Thankfully the compute and storage vendors are finally stepping up.
     
  18. Utumno

    Utumno Administrator Staff Member

    Post Count:
    42,820
    This is a common refrain I see over and over. I still think AWS is all it's cracked up to be, even more so - but for the right workload/mentality/company. People who forklift legacy shit over as-is are often going to get hammered on cost.
     
  19. Truffin Skoaldiak

    Truffin Skoaldiak Location: Your Mom's House, sniffing her panties!

    Post Count:
    891
    Agreed with the above statements - I have TONS of workloads in AWS and Azure. The key is knowing what needs to go there, from a solutions/architecture aspect, during the sales cycle, so expectations can be set and the engineering staff knows where the workload needs to live. Not every workload is designed to live in the public cloud, just as not every workload is a good fit for the private cloud, which is where "hybrid" solutions come in: on-site, private, and public.

    This is also where portals and automation pieces reign supreme. We don't currently auto-magically move workloads from on-prem to private or public - only from private (our managed/hosted cloud) in one datacenter geo to another, seamlessly. Or cloud bursting - into private or public, even from on-prem. But we never auto-magically move workloads into public without an evaluation of need, proper expectations, etc., unless it is cloud bursting.
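    Those placement rules boil down to a small decision table: private-to-private geo moves are the only fully automatic case, bursting may target private or public, and anything else needs a human evaluation first. A toy sketch, with field names and rules invented purely for illustration:

```python
# Toy version of the placement policy described above. The Workload
# fields and the rule set are made up for illustration only.

from dataclasses import dataclass

@dataclass
class Workload:
    current: str              # "on-prem", "private", or "public"
    bursting: bool = False
    evaluated: bool = False   # has a solutions/architecture review happened?

def allowed_auto_targets(w):
    """Where a workload may be moved without further human sign-off."""
    if w.bursting:
        return {"private", "public"}   # bursting may go either way
    if w.current == "private":
        return {"private"}             # geo-to-geo moves within private only
    if w.evaluated:
        return {"private", "public"}   # evaluation unlocks the public cloud
    return set()                       # otherwise: no auto-magic moves

print(sorted(allowed_auto_targets(Workload("on-prem", bursting=True))))  # -> ['private', 'public']
print(sorted(allowed_auto_targets(Workload("on-prem"))))                 # -> []
```

    The point of encoding it this way is that the portal/automation layer can enforce the policy mechanically, while the "evaluated" flag keeps the human architecture review in the loop.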
     
    Last edited: Oct 25, 2017