Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Shoelayceberry the [Unlaced]

Pages: [1] 2 3 4 5 6 ... 652
Tech Heads / Re: Permissions Management
« on: February 18, 2015, 10:11:17 PM »
I assume that's the one, yes. Unfortunately, I am not on the team responsible for this. And, with the multi-hundred TBs of data being moved, they decided to do a big copy over many days/weeks so that they will only have to copy over any changes made in the last few days. Basically, I was told that all user home areas, which are presented over NFS and CIFS/SMB, will only retain the NFS/Unix permissions and I have to deal with the fallout on the Windows side - basically this was all decided without me and tough shit.

I made a breakthrough today on dumping the permissions to CSV with PowerShell's Get-Acl. Basically, it appears that if you dump all folders in \\server\home (assume each subfolder is named after the username of whoever it belongs to) with Get-ChildItem and pipe that to Get-Acl:

1) It doesn't process the folders serially. I assume it builds a tree internally, since the direct output of Get-ChildItem isn't the sorted list you would see when viewing the directory.

2) If it encounters an error - in this case a folder owned by root on the Unix side - the process stops without finishing. I had roughly 1,000 directories, but only saw output for ~800.

3) #2 really prevents you from seeing the problem, or rather where the failure occurs. So instead, I used Get-ChildItem to dump a folder list to CSV, then sorted the list in Excel; there's probably a way to do the sort in PowerShell, but I couldn't figure it out and had already wasted too much time. Finally, using Get-Content on that CSV and piping it to ForEach-Object with Get-Acl in the loop (while in the share mounted as a drive letter), I created a serialized process that runs on each folder, one at a time. Even though it still fails on the - it turns out - one problem folder, it continues to dump the rest of the permissions. So now I have all permissions backed up, save the one folder I don't have access to, which I can tell the Unix guys to fix either before or after the migration.
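The serialized loop described above can be sketched roughly like this (the drive letter, export path, and column names are my assumptions, not the actual script; Sort-Object also stands in for the Excel sorting step):

```powershell
# Sketch: dump per-folder ACLs one at a time so one bad folder
# doesn't kill the whole pipeline. H: is the mounted \\server\home.
$folders = Get-ChildItem -Path H:\ -Directory | Sort-Object Name

$results = foreach ($f in $folders) {
    try {
        $acl = Get-Acl -Path $f.FullName -ErrorAction Stop
        foreach ($ace in $acl.Access) {
            [pscustomobject]@{
                Folder   = $f.FullName
                Owner    = $acl.Owner
                Identity = $ace.IdentityReference
                Rights   = $ace.FileSystemRights
                Type     = $ace.AccessControlType
            }
        }
    } catch {
        # Log the failure (e.g. the root-owned folder) and keep going.
        Write-Warning "Failed on $($f.FullName): $_"
    }
}

$results | Export-Csv -Path C:\temp\home-acls.csv -NoTypeInformation
```

The try/catch per folder is what makes the run survive the one inaccessible directory instead of stopping at ~800 of 1,000.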

Now I just have to solve the reverse problem: given my CSV of folder names and permissions, apply those permissions back to the individual folders correctly. If I can't figure that out by noon tomorrow, I'll fall back on progress I've already made building an ACL from scratch - one containing all the domain permissions needed (like Domain Admins and the Filer Admins) - that I can apply to all folders with Set-Acl. I would then create a separate script to apply user ownership to each person who should own a folder. I have also thought about how I would do it with icacls.exe, if need be.
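The from-scratch fallback could look something like this (group names, CSV path, and the owner-equals-folder-name assumption are all hypothetical; note that setting ownership via Set-Acl generally needs elevated privileges):

```powershell
# Sketch: build one inheritable domain-level ACE and apply it to every
# home folder, then set the owner to the matching domain user.
$adminRule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    "DOMAIN\Domain Admins",          # assumed group name
    "FullControl",
    "ContainerInherit,ObjectInherit",
    "None",
    "Allow")

Import-Csv C:\temp\home-acls.csv |
    Select-Object -ExpandProperty Folder -Unique |
    ForEach-Object {
        $acl = Get-Acl -Path $_
        $acl.AddAccessRule($adminRule)
        # Assume each folder's leaf name is its owner's username.
        $user = "DOMAIN\$(Split-Path $_ -Leaf)"
        $acl.SetOwner([System.Security.Principal.NTAccount]$user)
        Set-Acl -Path $_ -AclObject $acl
    }
```

The icacls.exe equivalent for the ownership half would be along the lines of `icacls <folder> /setowner DOMAIN\user`, which can be easier than Set-Acl when the account running the script doesn't hold the restore privilege.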

Tech Heads / Re: Permissions Management
« on: February 17, 2015, 10:14:10 PM »
Unfortunately, no. All data has actually been mirrored, with mirroring continuing until the cut-off this weekend.

Tech Heads / Re: Hey pals what's wrong with my computer
« on: February 16, 2015, 03:18:51 PM »
To be safe, you could boot to some live CD (Windows Install Disk, or other) and use that to copy data over. Then use something like the "Ultimate Boot CD" to test the disk(s).

Tech Heads / Re: Deleted Chrome, Explorer, and Firefox
« on: February 16, 2015, 03:12:38 PM »
did you physically delete the EXE's?

Tech Heads / Permissions Management
« on: February 16, 2015, 03:11:49 PM »
Next up, we have our storage upgrade/migration going this month. In order to reduce some complexity, we are moving from older systems on NetApp, Isilon, and Sun, to just NetApp and Isilon. NetApp will run all performance required roles (grid and VMware infrastructure) and Isilon will handle the generic file server role.

Apparently, the move of our home areas to their new location on Isilon will destroy all NTFS permissions while maintaining UNIX permissions. It was handed off to me due to my growing skill with PowerShell. At first blush, I figured it would be easy to just back the permissions up, then reapply them. Of the ~1000 home directories, ~200 errored out. I got piled under other work and still haven't figured out why. At this point, I just need to solve the problem for the majority, then figure out why the ~200 fail.

As I put more work into it, it seems like it may be harder than I expected. The migration happens next weekend whether I am ready or not. I ran across icacls.exe today; I knew of its existence before, but hadn't considered it until now.

The question is: have any of you ever done this? Do you have a preference, and if so, why?

Tech Heads / Re: Exchange Admins?
« on: February 16, 2015, 02:51:21 PM »
Good points Atrocity.

Thanks Truffin for letting me know I wasn't the omega man of Exchange 2007.  ;)

Most issues seem to have been resolved for the short term. My Boss got DPM working on the stores, which basically gives you near-continual backups running all the time. Hopefully I never have to deal with it again, but if I do, I feel confident now that I can handle it.

Spamalot / Re: NPCF
« on: February 12, 2015, 12:22:33 AM »
nice post, count faggot

Tech Heads / Re: Exchange Admins?
« on: February 11, 2015, 07:41:46 PM »
Haven't found an answer to the circular logging question yet, but haven't looked too much either. I did see that you wouldn't be able to run your Exchange-aware backups while circular logging is enabled, so if it works, and if you go that route, remember to turn it off again before the next backup attempt.

As to our current process, I finally found the perfect article from an Exchange MVP that talks about full log drives:

pertinent piece:

The only log files that can be removed manually are log files that do not contain uncommitted data. If a log file that is needed by one of the stores (contains uncommitted data) is mistakenly erased, the store becomes inconsistent and un-mountable (restore time, woohoo!!).

At this point you must be asking yourself - how can I know which log files have been committed (and can be erased) and which have not been committed (and should not be erased)?

There are two answers to your question:

The Checkpoint file - as mentioned earlier, the checkpoint file is used by the Exchange storage system to identify the point up to which log files have been committed to the stores. By using the ESEUTIL tool, a system administrator can dump this information from the checkpoint file (Fig. 4).
The exact command used to dump the file is eseutil /mk <path to checkpoint file>.
The only information we are interested in is the line that looks similar to the following one: Checkpoint: <0xF,208,E2>. On this line, inside the brackets, you have three pieces of information, of which only the first interests us - 0xF. It points to the borderline log file: everything before it can be removed and everything after it is needed to keep the databases consistent.
In our case, with a log file prefix of E00, 0xF gives a suffix of 0000F, so the filename is E000000F.log.
Keep in mind that by erasing log files you lose the ability to restore the system to a point in time so my advice is to simply move the files to a different location, another benefit that is gained from moving the files is that if anything goes wrong you can always copy all log files back to their original location.

Fig. 4: Dumping the checkpoint file
The database files - Each database file used by the store contains information about the log files it needs to become consistent. To glean this information from the databases, a system administrator has to use ESEUTIL again. The command changes slightly: ESEUTIL /mh <path to database file>. The output can be overwhelming, but we are interested in two lines:
State- this line specifies the database state which can be either:

- Clean Shutdown (fig. 5) - indicates that the database was dismounted in an orderly fashion; in other words, all outstanding transactions were committed to the database and it is consistent (no log files are needed to bring it to a consistent state).

Fig. 5: Clean Shutdown

- Dirty Shutdown (fig. 6) - indicates that the database was shut down incorrectly - outstanding transactions from log files were not committed and the database is inconsistent. The next time the store is mounted, the database files will need specific log files to become consistent - without them the store will not mount.

Fig. 6: Dirty Shutdown
Log Required - this line specifies the log files needed to make the database file consistent.

- State: Clean Shutdown - the Log Required line will show 0-0, meaning no log files are needed.

- State: Dirty Shutdown - the Log Required line will show a range of numbers such as 17-19. This is the range of log files needed to bring the database to a consistent state (17, 18, 19). To identify the log filenames, the numbers have to be converted to hexadecimal - in our case 11, 12, 13 - with the storage group's log prefix added, giving E0000011.log, E0000012.log, E0000013.log.
Of the methods discussed I prefer using the checkpoint file. It is simpler and there is only one place to look as opposed to checking the database file where you have several places to check.

After identifying the committed log files they can be manually erased. As I said earlier I prefer copying them to a temporary location until I am sure that the storage system is working correctly. When you have enough space on the volume you can mount the stores.
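The decimal-to-filename conversion the excerpt walks through can be sketched in PowerShell (the E00 prefix follows the article's example storage group):

```powershell
# Sketch: turn the "Log Required" decimal range 17-19 into log filenames.
# X5 formats the number as 5 uppercase hex digits; E00 is the log prefix.
17..19 | ForEach-Object { 'E00{0:X5}.log' -f $_ }
# E0000011.log
# E0000012.log
# E0000013.log
```

Anything with a suffix below that range has been committed and is safe to move aside; anything in or after the range must stay put.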

Actually, in the same article they address circular logging:

Turn on Circular Logging for the storage group - Before you scream and shout, I do understand that I am walking on thin ice here - but drastic situations call for drastic measures. By turning on the Circular Logging option for the storage group while the stores are still mounted, the log files will be pruned and only five log files will remain. Turning on circular logging is done by accessing the properties of the storage group and checking the option called "Enable circular logging". Read and acknowledge the warning that appears and you are done - watch the log files disappear.

Advantages- The only advantage here is time. This is the fastest solution for the problem.

Disadvantages- Circular Logging will take away your ability to restore the system to the point in time at which the server crashed.

    Hoitz: What the hell is this?
    Gamble: It's my car. It's a Prius.
    Hoitz: I literally feel like I'm driving around in a vagina.

Spamalot / Re: Did everyone call Saul last night?
« on: February 10, 2015, 06:47:21 PM »

Tech Heads / Re: Restoring 8TB
« on: February 10, 2015, 11:27:01 AM »
Ouch. Sorry man.

Tech Heads / Re: Exchange Admins?
« on: February 10, 2015, 11:24:48 AM »
It's been a couple of years since I've had to deal with Exchange, and it was 2010, not 2007.  We had a similar problem where the log files would build up and eventually choke the disk on the server.  Exchange won't let you clear the logs unless they are backed up.  However, you can change the log to circular logging and then back, and it will gobble up all of the logs and free up the disk space.  Obviously this isn't a perfect solution, but if your server is crashing due to disk issues this trick can save your ass.

We migrated everyone off of Exchange to Google Apps and never looked back.  Google makes some great tools for automating the migration and it went pretty smoothly for us.  I've talked with some friends who migrated to 365 and it sounds like msft doesn't provide nearly as many helpful tools.  They mentioned a third party application or service that they went with that made it pretty seamless; I'll try to get the link for you.  Exchange is a pretty awesome server, but email is so damn critical and there's so much that can go wrong that it just doesn't make sense to run it in house.

Good luck.

I had seen a few hits about enabling circular logging temporarily. It wouldn't overwrite anything, right? The logs would get as big as they needed, but the circle would close after being committed? Anyway, good to have some positive thoughts that way. Thanks a lot. I appreciate it.

General Discussion / Re: SOE is now Daybreak Game Company
« on: February 02, 2015, 11:55:16 PM »

What about Smed? And EQN?

Spamalot / Re: Movies that I've watched recently
« on: February 02, 2015, 11:33:17 PM »
Just watched Jodorowsky's Dune, a documentary about the guy's attempt to put Dune into movie form in the '70s. I've read the first trilogy but don't think that's necessary in order to watch this.

good? I've wanted to see it too.

I want to play EverQuest again. I just have a Mac laptop and I remember setting up Boot Camp / a partition is a huge pain in the ass.

if you have the free space, just need a thumb drive. pretty easy.

General Discussion / Re: Theory posted to XKCD
« on: January 30, 2015, 12:19:50 AM »
good on you for having the balls to do it. failure is the crucible to greatness.

Spamalot / Re: my body hurts
« on: January 29, 2015, 05:20:29 PM »
makes sense

General Discussion / Re: 17 year old girl killed by cops in Texas
« on: January 29, 2015, 04:54:58 PM »
So sad man.  Shit went on for like 15 minutes.  None of them had mace or a tazer?  None of them could have gotten to one of those either in their car right outside or from the actual police station?  Shrug.

Read a quick brief that said there was a Taser attempt. It didn't affect her.

Spamalot / Re: my body hurts
« on: January 29, 2015, 04:47:18 PM »
Did not know about the MS.

Spamalot / Re: JET interview in 1 hour
« on: January 29, 2015, 04:40:50 PM »
good luck

Spamalot / Re: That's enough internet for today.
« on: January 29, 2015, 12:10:47 AM »
Between this and the postpartum woman, on meds, slitting the throats of her 2-year-old and twin 6-month-olds to "quiet them down," today pretty much became a downward spiral.

Spamalot / Re: New Ghostbusters cast
« on: January 28, 2015, 11:47:08 PM »
Egon died in 2014 kids

General Discussion / Re: Pleenq -- My New Startup
« on: January 28, 2015, 09:22:04 AM »
when does the new season start? Loved that show.

General Discussion / Re: Pleenq -- My New Startup
« on: January 27, 2015, 02:45:40 PM »
this is actually a monetization model

I picked that up. I don't think I understood how you or they make money, but it's a damn powerful idea. You really could make a metric fuckton of money if you can grab decent market share early. The big boys would have to buy you out then, and I doubt it would be cheap. Grats man.

General Discussion / Re: Fais Vase
« on: January 26, 2015, 11:56:17 PM »

Spamalot / Re: I know you guys think I'm crazy
« on: January 22, 2015, 03:38:16 PM »
It will look like gibberish to anyone who doesn't know their shit.

A quote like that would most likely result in being dismissed, especially if anyone were to know that you have no formal training in either Math or Physics.

That doesn't mean it's not true, but it would be hard to take anyone seriously under those circumstances. I think Reddit may be a good idea, if for nothing else than to see whether you really understand what you think you do under questioning. I also haven't looked at all the final edits, due to needing time to digest technical/theory writing.

I believe "reboot" is used for series, both movie and TV, whereas "remake" is used for stand-alones and therefore mostly/always for movies.

Tech Heads / Re: Exchange Admins?
« on: January 21, 2015, 09:10:00 PM »
Absolutely. Now that I understand most of what's going on, it's that itch you can't scratch. This will get solved this year, either by storage upgrades allowing our backups to process more/faster, by the temporary System Center DPM server we are about to stand up as a secondary backup solution to keep log files truncated, or by a migration to 365.

It's just that the most Google hits I find are Failover Cluster info for generic services, which is fine, but I need solutions specific to what we have, and you will only see that for Exchange 2003 or 2007, as 2010/2013 moved to DAGs, which are different. And we may already be doing it correctly, but I was wondering: if we caught the logs filling up but didn't have time to run a backup, what is the best-practice way to trim those log files without potentially losing data?

What I have been doing with my documentation is linking info to all relevant source material, so no one has to do this again, and making a quick and dirty "cheat sheet" for how to handle/troubleshoot any issues, particularly DB issues, since that is what occurs with log file truncation problems.

Tech Heads / Re: Exchange Admins?
« on: January 21, 2015, 07:52:51 PM »
Sorry for any misunderstanding, as we have jumped around a bit, but the main issue is proper Failover Cluster management for the Exchange DB cluster - clearing log files either before (hopefully) or after a failure (the log drive filling up). The procedure we use, mentioned above, seems to create more work. Maybe that's the only way you can do it, but I was hoping there was a step or two we're missing that would prevent reseeding.

Tech Heads / Re: Exchange Admins?
« on: January 21, 2015, 06:04:48 PM »
Yes I have.

We merged and had too many IT people, for sure. I bit the bullet, since I wasn't native to the home office city anyway, and took the San Diego gig before I got cut.

In the last 10 years our main profit-making machine disappeared. Gene sequencer technology has progressed faster than Moore's Law. I have a shirt, from when I joined the Institute, celebrating our first millionth lane sequenced; it took ~3 years, IIRC. We were one of the largest privately owned gene sequencing centers in the world: 12 rows of 10 sequencers, online 24/7/365, worth $100 million. That was probably a team of 25-ish lab techs just running the sequencers, and probably another 25 lab techs supporting them. We had a warehouse-style building supporting all this, and one admin positioned over there every day who supported just that building.

We helped track down the "white powder" being mailed to Congress after 9/11. We've been called in to assist with the Bird Flu that came through a few years ago. Numerous other incidents. Now, however, there's a single machine just moved into my lab area, that does a million lanes in 2 days; the price tag is somewhere between $500k-$1 million.

We're near the center of the Big Data revolution and having to radically change how we do business. Automation will be key to how we do business in the future, and I have been doing my part over the last 5 years to get us there, but with the knowledge that we may be downsizing, they are hesitant to hire more support staff to keep up with what we have. Yet they have asked for more items to come online, mostly due to the President (who has never gotten along with IT, and is fairly inept with it) and her brother (who is almost as bad). As things wind down, there will no longer be a place for the brother and he will bail. Hopefully she will soon thereafter; she's a very good scientist and works with the Human Microbiome Project a lot. Once they're out the door, there will be more support. I am hoping to outlast that, but before I go, if I go, I will make sure the CEO knows why. He may even relocate me elsewhere, as he is pretty good at taking care of people who take care of him.
