Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Shoelayceberry the [Unlaced]

Pages: [1] 2 3 4 5 6 ... 652
Tech Heads / Re: AWS usage
« on: March 27, 2015, 07:55:12 PM »
Any experience with extending VPN to AWS EC2?  I'm interested in using cloud services for DR or hosting warm spares but I haven't had time to look into how we would put these into our VPN.  I'd like to sort this out for AWS or VMWare vCloud Air.  Anyone been there, done that?

Chemosh would almost certainly know that. Try pinging him...and then mentioning it here ;)

General Discussion / Re: Pillars of Eternity Out
« on: March 26, 2015, 09:21:27 PM »
chance of game-flu in the morning?

Tech Heads / Re: AWS usage
« on: March 26, 2015, 09:20:38 PM »
Gotcha. Didn't even know/see that mentioned.

Tech Heads / Re: AWS usage
« on: March 26, 2015, 04:23:49 PM »
Ephemeral? S3 is ephemeral? It looked like EBS would be the storage to use for big computes, true?

Tech Heads / Re: AWS usage
« on: March 26, 2015, 04:22:52 PM »
Never took these; I was self-taught. But they have a bunch of free stuff now:

Things I'd be conscious of:

a) AWS is ridiculously convenient, but remember that public cloud is all virtualized/commodity and you cannot maintain the same mindset as you do in your own datacenter.  Assume any server can and will go down with little notice and plan accordingly.
b) Learn the difference between EBS and ephemeral storage, learn it well.  Almost every AWS noob drops a bunch of data in ephemeral storage, reboots, then finds out all their shit is gone.
c) Pay close attention to expected service levels for each instance type.  Remember always that unless you pay for nosebleed tiers, you are sharing with lots of other users and your performance can suffer accordingly.  Some dude's CPU or IOPS usage spikes?  Your performance may crater.  Monitor and plan accordingly.

AWS is amazeballs but takes a different mindset to run on.  Most people take a while to understand this, esp. when they dive in with a ton of stuff immediately.  I'd go slow before going to production.

Jackpot. Thanks Utumbro.

I see this as part of a strategy to fulfill what I don't think will work - downsizing to the point of insanity. I have a pretty diverse skill set, but I am horizontal, not vertical. With my current and future responsibilities, I'll never be the best Storage Admin. I'll never be the best Linux admin (especially for the Grid stuff we do). I'll never be the best Network Admin; I don't want to pursue anything past my CCNA at this point. But if they look to me to fulfill all those roles, I'll have to offload complexity somewhere. I am thinking AWS for our complex computes, project Storage, and Archive storage. Then, internally, I think I can handle VMware for servers. I'll try to maintain VMware VDI, but at the first sign that I won't have time, I'll vote it off the island. I've wanted to do this for years, though I like having it. Once I see the costs, I'm sure we could get a better deal from Microsoft, due to our charity pricing, if we decide to keep the capability. It might make sense to jump ship for Servers too, due to pricing changes for management services, aka System Center, and if "TheGrid" moves to the cloud, that's the majority of our Linux infrastructure.

My superiors put out the cost structure for internal projects - what we will charge the Staff under our usage model - starting now. I also sat in on an AWS in BioTech conference yesterday. AWS beats us in all categories except CPU hours. Of course our model pays for the Storage and Grid Administrators' salaries (partially, but the majority), and the AWS figures would not account for that. It won't take long before some hotshot Scientist (who doesn't understand why Enterprise disks cost more than at BestBuy), or some self-important Dev who thinks they know System Administration, sees those figures and either pleads for us to use AWS, or goes rogue and tries to use it themselves, creating who knows what collateral damage and forcing IT to pick up the pieces. So, knowing how to set it up securely will give us the pieces we need to get started; then I could work in more of a DevOps capacity with the Devs to keep pricing low for computes, and manage a small team of internal helpdesk and server admins.

Tech Heads / AWS usage
« on: March 25, 2015, 11:26:22 PM »
What are the basics? Any tips? Any good videos for learning?

For those that are bored and want to read about what we are doing at Riot re: all this nonsense:

Well-written community analysis:

Holy shit. Amazing you guys are pulling in the bread to do this with microtransactions! Who'd a thought a big personal investment, way back when, would generate this kind of economic power.

Now you need to monetize that new revenue stream.  The first all-games ISP? :azn:

General Discussion / Re: Take a trip back in time
« on: March 23, 2015, 02:20:03 PM »
wow. fuck yeah dark tower.

Tech Heads / Re: Cisco Certs - Torrack or anyone
« on: March 22, 2015, 04:21:01 PM »
Fuck it. Think I am going to retry Routing and Switching. It all comes back to that and can only help you on every network related task. At the end of the day, it's all about the network.

I will hopefully finish up my IIS self-learning next week. Then I'll start my SQL self-learning with my "Month of Lunches" book, then spend all of April on CCNA. I figure one week on the basic CCENT stuff, and 3 weeks on the harder stuff before Cisco Live. Hopefully that will be enough, since I have technically studied it something like 3 times and passed it once over the years.

General Discussion / Re: Comments for RL Pic Posts - 2015 Thread
« on: March 21, 2015, 12:14:19 AM »
liquid weed

Tech Heads / Re: funny tech horror story
« on: March 20, 2015, 07:27:49 PM »
we at least invest more than that...for now.

I talked to my Director of IT a couple days ago, the guy who had me waste time on those unneeded scripts for the storage migration. Prototype old-school IT nerd: all brains, knows a ton of shit, limited social relatability. He's plugging away on infrastructure so that if we shut down his coast, it will be a bit easier for us to pick up the pieces. Even in the best case this will be no cakewalk; as I have said in the past, we are complicated beyond what our population is. Anyway, he broke the news to me that the extra position out here was denied. I guess it wasn't too shocking.

What was shocking is the fact that our new COO has sort of become the acting President (from scuttlebutt). Great news, as no one liked the President and she was the one shitting on IT budgets and employment. Unfortunately, that truck will keep going! As a Finance person (and possibly acting CFO), she believes that since her old company supported just shy of 500 ppl with 4 IT people, there is no reason why we can't do that. Luckily, she mentioned that plan to my VP and Director and they told her it was a fantasy unless they severely cut back on services provided. Obviously, at least with the Director, they are just trying to keep jobs; it couldn't possibly be from experience. So, she called not 1 but 2(!) separate Consulting Firms, which, when told what we support, said they weren't sure how we were keeping up now, much less with a staff of 4, and confirmed there was no way. Fortunately for her, she knows better from her entire experience of 2 jobs in 4 years where she implemented the same scenario, and is looking to implement the 4-person staff idea no matter what anyone says. No, her previous experience was not in Research or Biotech, and she has no idea what those IT teams supported anyway. And, since she was only at each of her 2 previous companies for less than 2 years, she has no idea how those decisions trended out. Luckily she brought both of her Director-level subordinates to each job, so they know how to wreck a business, too.

I am left in the middle of all this bullshit. I believe that we could certainly tighten up and rearrange IT budgets to save more money. Some of that would entail monitoring hardware across the board, on a consistent basis, to be able to prove when we need more/bigger hardware, rather than just fattening everything up every 5 years. Yes, we should move as much as possible into the cloud to cut infrastructure costs, and the labor dollars associated with supporting a bloated system.

We have made changes this year that go a long way towards that. Our storage migration has set up a way to consistently bill PIs for their usage, and not for total IT purchases. Our grid has been set up for years to be able to bill users for CPU time, and we are now implementing that, all starting now. When we uproot all that shit and move it here - btw, the fucking COO asked if we could just fly it all here to reduce downtime, like it's the travel that will take the longest - I am betting it will take 6 months to a year to get it all 100% functional and working as intended again; the shorter end if they continue to pay the East Coast people as Consultants to ensure it works. I am figuring I have until December 2016 before I am the one being looked at as responsible. I have until then to absorb whatever knowledge I can - hopefully I can squeeze VCP-Datacenter and RHCE classes out of them before then. Assuming I re-up my CCNA, maybe it will be time to look elsewhere. Maybe consulting. Who knows.

General Discussion / Re: John Quiggin Thinks the TPP Sucks
« on: March 18, 2015, 03:42:52 PM »
one locked and topics merged.

Tech Heads / Re: Cisco Certs - Torrack or anyone
« on: March 18, 2015, 12:31:13 PM »
Certainly, and in that order. As a systems guy, I hope to never have to know! In a perfect world I would have time to study all 3 (w/routing and switching), as there is certainly bleed-over into what I do and will only get more useful if we become one site.

Tech Heads / Re: Permissions Management
« on: March 16, 2015, 06:32:25 PM »
ScaleMatrix is the company we are looking at locally, in Whale's Vagina. They have Dark Fiber to our "donated CoLo" at UCSD/SDSC, which also supplies our access to Internet2 and supplied National LambdaRail until it went tits up. If we go single-site, we may not need that anymore, unless Internet2 gets us faster access to AWS and the like, though it would give us 10Gb speeds between the Datacenters and the office.

I'm hoping we keep VMware in-house, as that will complete my goals for datacenter knowledge and give me control of our VDI, so I can finally figure out if it's worth the blood, sweat, and tears it has cost us over the last 1.5 years and/or give me access to the dollar amounts spent, so I can compare to M$'s new pricing on VDI, using Hyper-V. We get mega discounts from M$ as a non-profit and I think we should leverage that to minimize costs on these things as much as possible, even if that means swapping completely to Hyper-V. No sense in being married to a technology these days, as long as we have feature parity to what we use.

Tech Heads / Re: Permissions Management
« on: March 15, 2015, 11:58:27 PM »
Sol I mean this in the nicest way, but the ppl you work with are fucking retarded and you putting up with this shit is enabling them.  Basically going back to my first observations that this situation is ridiculous

This was a pretty dumb ass situation, for certain. This is our Director of IT who caused this. I'm pretty sure it's because he can't let go. He is basically in charge of Unix, Storage, Grid, and LDAP, in addition to being Director. Oh, and since VMware lives on basically Linux, he is in charge of that too. He's a smart fucking dude, and could absolutely handle any 3 of those tasks, but he can't let go, and now there's no one to really dish shit off to - especially since his whole team will most likely be going away EOY, including himself. Then again, maybe that's why he pushed so hard - just to get something done before they lose their jobs. I dunno. 

Work is really strange these days. I went with the VP of IT to a possible CoLo in San Diego last week, for any shit that survives a move, but I can't talk about it to my direct supervisors. We just purchased these Isilons and NetApp heads, and have 2 multi-node VMware clusters back east. WTF is going to happen to that? This is going to waste so much money, and require so much contract support to get done, because 75% of IT knowledge is about to get dumped in the shitter. I sat in 2 meetings last month with a company that we are theoretically working with to host our Financial software, where our side had no timetable, and when the new Finance people sat in on the second meeting, they said we are probably not even going to keep using the software in 2016. We basically went to a meeting with a company we have a contract with, brought nothing to the table, then said we should reschedule in 2 months because 1) Finance still hasn't decided what App they want to go with, and 2) they haven't decided whether they are just going to move the DC office or shut it down completely. How the fuck do you engage in a contract with a company that you don't even know if you'll need?

Right now, I'm just doing what I always do. I've upgraded all my Mac infrastructure. I'm about 1-2 weeks away from a server migration to a new Symantec Endpoint Protection host (Server 2003 -> 2012, which will require a disaster recovery operation). After that, I'll clean and rebuild my Altiris infrastructure. If I get bored looking at Altiris again, I'll finish the Account Auditing scripts that I started and/or look into OSX security audits. If I get all that done before August, I'll be able to handle any other bullshit that occurs and will have a spectacular resume for next year, when I present my position and salary requirements once they realize how badly they have doubled over and fucked their IT up, due to a President who won't listen to IT, but will listen to the 3 new Finance people (CFO, Director, Director) who tell her how cheap it will be to put everything in the "magic cloud box." A perverse part of me wants to hang around long enough to see IT rebuilt and their careers ruined after this bullshit, but it will probably take 10 years to come full circle and I don't know if I can deal with the short term (3-5 years) it will take for the shit to hit the fan.

Tech Heads / Re: Cisco Certs - Torrack or anyone
« on: March 15, 2015, 04:21:15 PM »
what do you mean? I should ask them what their opinion of their easiest Associate level exam is?

Tech Heads / Re: Permissions Management
« on: March 15, 2015, 04:20:06 PM »
So, this turned out to be a bunch'o'bullshit and an extraordinary waste of my time. I don't know why we didn't use the tool Truff mentioned; they were just using rsync, which is (I think) why the Windows permissions were getting axed, since the destination was not a Windows Server where we could turn on Services for Unix.

Further, there may not have even been any true Windows permissions applied at all. I was given the task and told to add whatever Windows permissions would be needed. Since it was literally all users, I decided to add Domain Admins with full control, and the user, as owner, with full control. A test run that just touched every file/folder took ~9 hours. Apparently, the process that applies the permissions can take several seconds per folder, depending on the number of items below it; up to a couple of minutes. So, whenever it hit folders at the top level it screeched to a halt. It took almost the entire first day to hit all 1023 top-level folders. The good news was that it would speed up as it got closer to the leaf nodes, but who knows how long the whole run would take; I was estimating a week for completion, and they only gave me an 18-hour window initially, before I even ran any tests. Great.

But it gets better! Since I was not given access to the destination until my 18-hour window, there was no way for me to know that as I applied my Windows permissions, I was hosing the Unix permissions. Not total FUBAR, but I think the ACL application fucked up the inheritance on the Unix side, and further set group permissions to 750 instead of the 700 they need for SSH to work. So, the Unix team was having to reapply permissions all over, repeating on anyone my script hadn't touched yet. Then, 3 days in, they realized the cause and had me stop all scripts. So, without doing enough research, or asking me to do a real test to see what would happen, we caused many extra hours of work between the Unix and Windows Teams in general, and for me specifically all the hours of development work were for naught, since it all got rolled back anyway. At some point, I will try to watch a new user creation and see what processes apply what permissions to these home areas, but I don't get access to the filers themselves to review Unix permissions properly. I can only see what Windows will show my Domain Admin account.

It was a shitty couple of weeks, for sure.  :buck2:

Tech Heads / Cisco Certs - Torrack or anyone
« on: March 15, 2015, 03:19:59 PM »
If you've read any other posts from me lately, you know that I think our 2 site Org is probably going to lose one site by EOY; not mine. There are just two of us on my coast, and the other guy is a 3rd year SysAdmin I who isn't really pushing himself; he has made some strides in the last year, but I still need to double check on everything he does beyond the basics. So, we will literally lose all organizational knowledge, beyond what I know. I am fully capable of running/moving our Windows and Mac infrastructure, but we're a biotech research non-profit. We will need to know at least some Linux and Networking, depending on how much we move to the cloud or CoLo.

My CCNA runs out in June, but luckily I have a ticket to CiscoLIVE 2015 San Diego, so I have options for re-cert. While I used to do basic port activations and VLAN switches in our old building, I have yet to get access to our routers/switches here, since moving in Dec 2013. This was mostly because the 6 months we were supposed to get access to the building before the move, became 2 weeks, so things were a steaming pile'o'shit for a while, as we fire drilled our way to functionality. They also needed to do some RADIUS upgrades/changes, which are almost done. The point being that I haven't had daily knowledge/practice of routing/switching in over a year, and even though we have brand new Nexus equipment in our Closets, the knowledge that I did have was from old IOS and Catalyst switches.

So: I inherited a (non-recognized/non-paid) Lead Windows Admin role when our Senior Admin quit in August last year, my experience is old and on old equipment, and there is currently a mad flight to push what we can to CoLo's or the cloud, eating up time and effort on top of the daily helpdesk work I am still required to do. With all that, I have to ask the question:

Which cert is the easiest to pass to maintain my CCNA: Voice, Wireless, R&S, or Security?

While I have the equivalent to R&S from my v1.1 cert, that was a fucking hard test, though I admit that's because I really am not a Network Engineer. I would say I am a CCENT+; not quite ready to run a full Cisco network, but I could get going in a hurry if I needed to. I have a little experience with older Call Manager and Unity for setting up phones and voicemail for all users, though just enough to do my tasks. I know a little wireless that a motivated home tech would know; frequencies, channels, etc., plus whatever Windows networking you could learn for the MCITP: Enterprise Admin; failed a couple tests by one question, but could go back now and pass for sure with some studying. I also am in charge of our Symantec Endpoint Protection services from a Security standpoint.

So, I know just enough to wreck a Network across multiple systems and though there is almost 4 full months to study, I don't have a lot of spare time at all - what would you recommend I shoot for?

Tech Heads / Re: Permissions Management
« on: February 18, 2015, 10:11:17 PM »
I assume that's the one, yes. Unfortunately, I am not on the team responsible for this. And, with the multi-hundred TBs of data being moved, they decided to do a big copy over many days/weeks so that they will only have to copy over any changes made in the last few days. Basically, I was told that all user home areas, which are presented over NFS and CIFS/SMB, will only retain the NFS/Unix permissions and I have to deal with the fallout on the Windows side - basically this was all decided without me and tough shit.

I made a breakthrough today on dumping the permissions to CSV with PowerShell Get-Acl. Basically, it appears that if you dump all folders in \\server\home (assume each subfolder is named after the username of whoever it belongs to) with Get-ChildItem and pipe it to Get-Acl:

1) It doesn't do it serially. I assume that maybe it builds a tree, since direct output of Get-ChildItem does not produce a sorted list that you would see when viewing the directory.

2) If it encounters an error - in this case a folder with an owner of root from the Unix side - the process stops without finishing. I had roughly 1000 directories, but only saw output for ~800.

3) #2 really prevents you from seeing the problem, or rather where the failure occurs. So, instead, I used Get-ChildItem to dump a folder list to CSV. Then, using Excel, sorted the list; there's probably a way to do it in PowerShell, but I couldn't figure it out and had wasted too much time already. Finally, using Get-Content on that CSV, and piping that to ForEach-Object with a Get-Acl (while in the mounted share as a drive letter) in the loop, I created a serialized process that runs on each folder/directory, one at a time, and even though it still fails on the - it turns out - one folder, it continues to dump the rest of the permissions. So now I have all permissions backed up, save the one that I don't have access to, which I can tell the Unix guys to fix either before or after the migration.
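A rough sketch of that serialized dump, as I understand it (paths and the output file name are placeholders, and I'm using Sort-Object in place of the Excel sort step; -ErrorAction Stop plus try/catch is one way to let the root-owned folder fail without killing the run):

```powershell
# One-folder-at-a-time ACL dump; H:\ stands in for the mounted \\server\home share.
$rows = foreach ($f in (Get-ChildItem -Path H:\ -Directory | Sort-Object Name)) {
    try {
        $acl = Get-Acl -Path $f.FullName -ErrorAction Stop
        foreach ($ace in $acl.Access) {
            [pscustomobject]@{
                Folder            = $f.FullName
                Owner             = $acl.Owner
                IdentityReference = $ace.IdentityReference.ToString()
                FileSystemRights  = $ace.FileSystemRights.ToString()
                AccessControlType = $ace.AccessControlType.ToString()
            }
        }
    } catch {
        # e.g. the one folder owned by root on the Unix side; note it and keep going
        Write-Warning "Could not read ACL on $($f.FullName): $_"
    }
}
$rows | Export-Csv -Path C:\temp\home-acls.csv -NoTypeInformation
```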

Now, I just have to solve the reverse problem - given my CSV of folder names and permissions, apply all of those permissions to the individual folders correctly. If I can't figure that out by Noon tomorrow, I have already made progress on creating an ACL from scratch, that I can apply to all folders that will contain all Domain permissions needed (like Domain Admins and the Filer Admins), using Set-Acl. I would then create a separate script that would apply User ownership to each person who should own the folder. I have also thought about how I would do it with icacls.exe, if need be.
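For what it's worth, a minimal sketch of that reverse pass, assuming a CSV backup with one row per folder/ACE (the column names, inheritance flags, and paths here are my assumptions, not anything we actually ran):

```powershell
# Hypothetical reapply pass: rebuild each folder's ACL from a CSV backup,
# re-add each access rule, set the owner, and write it back with Set-Acl.
Import-Csv -Path C:\temp\home-acls.csv | Group-Object Folder | ForEach-Object {
    $acl = Get-Acl -Path $_.Name
    foreach ($row in $_.Group) {
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
            $row.IdentityReference, $row.FileSystemRights,
            'ContainerInherit,ObjectInherit', 'None', $row.AccessControlType)
        $acl.AddAccessRule($rule)
    }
    # set the user as owner, per the Domain Admins + user-as-owner scheme
    $acl.SetOwner([System.Security.Principal.NTAccount]$_.Group[0].Owner)
    Set-Acl -Path $_.Name -AclObject $acl
}
```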

Tech Heads / Re: Permissions Management
« on: February 17, 2015, 10:14:10 PM »
unfortunately, no. All data has actually been mirrored, with continuing mirroring until the cut-off this weekend.

Tech Heads / Re: Hey pals what's wrong with my computer
« on: February 16, 2015, 03:18:51 PM »
To be safe, you could boot to some live CD (Windows Install Disk, or other) and use that to copy data over. Then use something like the "Ultimate Boot CD" to test the disk(s).

Tech Heads / Re: Deleted Chrome, Explorer, and Firefox
« on: February 16, 2015, 03:12:38 PM »
did you physically delete the EXE's?

Tech Heads / Permissions Management
« on: February 16, 2015, 03:11:49 PM »
Next up, we have our storage upgrade/migration going this month. In order to reduce some complexity, we are moving from older systems on NetApp, Isilon, and Sun, to just NetApp and Isilon. NetApp will run all performance required roles (grid and VMware infrastructure) and Isilon will handle the generic file server role.

Apparently, the move of our home areas to their new location on Isilon will destroy all NTFS permissions, while maintaining the UNIX permissions. It was handed off to me due to my growing skill with PowerShell. At first blush, I figured it would be easy to just back up, then reapply. Of the ~1000 home directories, ~200 errored out. I got piled under other work and still haven't figured out why. At this point, I just need to solve the problem for the majority, then figure out why the ~200 fail.

As I put more work into it, it seems like it may be harder than I expected. The migration happens next weekend whether I am ready or not. I ran across icacls.exe today; I knew it existed before, but hadn't considered it for this.
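For reference, icacls can do the backup/restore wholesale, which might be simpler than scripting it (paths here are placeholders; /t recurses and /c continues past errors rather than stopping):

```powershell
# Save every NTFS ACL under the share to a single backup file
icacls \\server\home /save C:\temp\home-acls.bak /t /c

# Restore runs against the PARENT of the saved tree, since the backup
# file stores paths relative to where /save was pointed
icacls \\server /restore C:\temp\home-acls.bak /c
```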

The question is: ever done this? Do you have a preference, and if so, why?

Tech Heads / Re: Exchange Admins?
« on: February 16, 2015, 02:51:21 PM »
Good points Atrocity.

Thanks Truffin for letting me know I wasn't the omega man of Exchange 2007.  ;)

Most issues seem to have been resolved for the short term. My Boss got DPM working on the stores, which basically gives you near-continual backups running all the time. Hopefully I never have to deal with it again, but if I do, I feel confident now that I can handle it.

Spamalot / Re: NPCF
« on: February 12, 2015, 12:22:33 AM »
nice post, count faggot

Tech Heads / Re: Exchange Admins?
« on: February 11, 2015, 07:41:46 PM »
Haven't found an answer to the circular logging question yet, but I haven't looked too much either. I did see that you would be able to run your Exchange-aware backups while circular logging is enabled, so if it works and you go that route, remember to turn it off again before the next backup attempt.

As to our current process, I finally found the perfect article from an Exchange MVP that talks about full log drives:

pertinent piece:

The only log files that can be removed manually are log files that do not contain uncommitted data. If a log file that is needed by one of the stores (contains uncommitted data) is mistakenly erased, the store becomes inconsistent and un-mountable (restore time, woohooo!!).

At this point you must be asking yourself - how can I know which log files have been committed (and should be erased) and which have not been committed (and should not be erased)?

There are two answers to your question:

The Checkpoint file - as mentioned earlier, the checkpoint file is used by the Exchange storage system to identify the point up to which log files have been committed to the stores. By using the ESEUTIL tool a system administrator can dump this information from the checkpoint file (Fig. 4).
The exact command used to dump the file is eseutil /mk <path to checkpoint file>.
The only information we are interested in is the line that looks similar to the following: Checkpoint: <0xF,208,E2>. Inside the brackets on this line are three pieces of information, of which only the first interests us: 0xF. It points to the borderline log file; everything before it can be removed and everything after it is needed to keep the databases consistent.
In our case 0xF means that the log file prefix is E00 and the suffix is 0000F, so the filename is E000000F.log.
Keep in mind that by erasing log files you lose the ability to restore the system to a point in time so my advice is to simply move the files to a different location, another benefit that is gained from moving the files is that if anything goes wrong you can always copy all log files back to their original location.

Fig. 4: Dumping the checkpoint file
The database files- Each database file used by the store contains information about the log files it needs to become consistent. To glean this information from the databases a system administrator has to use ESEUTIL again. The command slightly changes: ESEUTIL /mh <path to database file>. The output can be overwhelming but we are interested in two lines:
State- this line specifies the database state which can be either:

- Clean Shutdown (fig. 5) - indicates that the database was dismounted in an orderly fashion; in other words, all outstanding transactions were committed to the database and it is consistent (no log files are needed to bring it to a consistent state).

Fig. 5: Clean Shutdown

- Dirty Shutdown (fig. 6) - indicates that the database was shut down incorrectly: outstanding transactions from log files were not committed and the database is inconsistent. The next time the store is mounted, the database files will need specific log files to become consistent; without them the store will not mount.

Fig. 6: Dirty Shutdown
Log Required - this line specifies the log files needed to make the database file consistent.

- State: Clean Shutdown - the Log Required line will show 0-0, meaning no log files are needed.

- State: Dirty Shutdown - the Log Required line will show a range of numbers such as 17-19. This is the range of log files needed to bring the database to a consistent state (17, 18, 19). To identify the log filenames, the numbers have to be converted to hexadecimal (in our case 11, 12, 13) and the prefix added (based on the log prefix for the storage group): E0000011.log, E0000012.log, E0000013.log.
Of the methods discussed I prefer using the checkpoint file. It is simpler and there is only one place to look as opposed to checking the database file where you have several places to check.

After identifying the committed log files they can be manually erased. As I said earlier I prefer copying them to a temporary location until I am sure that the storage system is working correctly. When you have enough space on the volume you can mount the stores.
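The number-to-filename arithmetic in that article is easy to script if you're staring at a full log drive; a small sketch (the E00 prefix and the numbers are just the article's examples):

```powershell
# Log files are named <storage group prefix> + 5 hex digits + .log,
# so decimal 15 -> 0xF -> E000000F.log for prefix E00.
function Get-ExchangeLogName([string]$Prefix, [int]$Number) {
    '{0}{1:X5}.log' -f $Prefix, $Number
}

Get-ExchangeLogName 'E00' 0xF    # E000000F.log, the checkpoint example
17..19 | ForEach-Object { Get-ExchangeLogName 'E00' $_ }
# E0000011.log, E0000012.log, E0000013.log - the "Log Required 17-19" example
```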

Actually, in the same article they address circular logging:

Turn on Circular Logging for the storage group- Before you scream and shout I do understand that I am walking on thin ice here - but drastic situations call for drastic measures. By turning on the Circular Logging option for the storage group while the stores are still mounted the log files will be pruned and only five log files will remain. Turning on circular logging can be done by accessing the properties of the storage group and by checking the option called "Enable circular logging". Read and acknowledge the warning that appears and you are done - watch the log files disappear.

Advantages- The only advantage here is time. This is the fastest solution for the problem.

Disadvantages- Circular Logging will take away your ability to restore the system to the point in time at which the server crashed.

    Hoitz: What the hell is this?
    Gamble: It's my car. It's a Prius.
    Hoitz: I literally feel like I'm driving around in a vagina.

Spamalot / Re: Did everyone call Saul last night?
« on: February 10, 2015, 06:47:21 PM »

Tech Heads / Re: Restoring 8TB
« on: February 10, 2015, 11:27:01 AM »
Ouch. Sorry man.
