We all have our favorite ways to ensure that our databases are backed up. We may even have our favorite scripts, tools, and so on. Talking to someone who believes differently on this topic can be akin to a discussion of religion or politics, with heated arguments that seem to make sense on both sides.
That is not what I want to get into here. I'm just going to share what I like and why.
I love having an individual backup job for each database. This is more work for me, and it is not automated. But I can create a single job for a single database and schedule it at an appropriate time. When that job fires off, I can ensure that it starts, performs its task, and completes in a timely manner, without interruption from anything else. This takes a bit of doing and scheduling, but once it's done, I feel confident that my backup is the only thing going on at that moment and can complete successfully. If an issue arises with a single backup, the job can notify me that it failed, and I know by the job name which database is having issues. Most likely the other databases are all fine, unimpeded by this single failure. That is harder to accomplish with shared jobs or maintenance plans, in my opinion.
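To make that concrete, here is a minimal sketch of what the lone step of such a job might run. The database name and backup path are placeholders, not anything from a real system:

    -- Hypothetical per-database job step: one database, one full backup.
    -- [SalesDB] and the disk path are placeholders for your environment.
    BACKUP DATABASE [SalesDB]
    TO DISK = N'X:\Backups\SalesDB_Full.bak'
    WITH CHECKSUM, INIT, STATS = 10;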
So I spend the time creating the individual jobs, determining the appropriate times, and spacing those times out so that each job can start and complete successfully. I also configure each job to notify me if a failure occurs. This, coupled with other monitoring, ensures that if a job fails, I know about it soon. Knowing which one failed helps speed up the recovery.
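If you script these rather than clicking through the GUI, the wiring might look something like the sketch below, using the standard msdb job procedures. The job name, path, and schedule are assumptions for illustration; in particular, the 'DBA Team' operator is assumed to already exist.

    USE msdb;
    GO
    -- Hypothetical per-database full backup job that emails on failure.
    EXEC dbo.sp_add_job
        @job_name = N'Backup - Full - SalesDB',
        @notify_level_email = 2,                   -- 2 = notify on failure only
        @notify_email_operator_name = N'DBA Team'; -- assumed existing operator

    EXEC dbo.sp_add_jobstep
        @job_name = N'Backup - Full - SalesDB',
        @step_name = N'Full backup',
        @subsystem = N'TSQL',
        @database_name = N'master',
        @command = N'BACKUP DATABASE [SalesDB] TO DISK = N''X:\Backups\SalesDB_Full.bak'' WITH CHECKSUM, INIT;';

    EXEC dbo.sp_add_jobschedule
        @job_name = N'Backup - Full - SalesDB',
        @name = N'Nightly 01:30',
        @freq_type = 4,                            -- daily
        @freq_interval = 1,
        @active_start_time = 013000;               -- HHMMSS, so 1:30 AM

    EXEC dbo.sp_add_jobserver
        @job_name = N'Backup - Full - SalesDB';    -- run on the local server

Staggering the @active_start_time values across the databases is what keeps each backup from contending with its neighbors.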
So, that's how I like it for full backups: single, measurable, simple. But transaction logs are where I get lazy. I will often let all databases fall on the same schedule (if possible) for tlog backups, and I will run a single job that cycles through 'all' databases and performs the transaction log backups. I schedule this appropriately on a database server and let it fly.
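That single job is roughly a loop like the one below. This is a simplified sketch with placeholder paths, not a production script; a real version would add error handling and retention cleanup:

    -- Hypothetical 'all databases' tlog job: loop and back up each log.
    DECLARE @db sysname, @sql nvarchar(max);

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name
        FROM sys.databases
        WHERE recovery_model_desc = N'FULL'   -- log backups only apply here
          AND state_desc = N'ONLINE'
          AND name <> N'tempdb';

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'BACKUP LOG ' + QUOTENAME(@db)
                 + N' TO DISK = N''X:\Backups\' + @db + N'_'
                 + FORMAT(GETDATE(), 'yyyyMMdd_HHmmss') + N'.trn'';';
        EXEC sys.sp_executesql @sql;   -- an error here fails the job step
        FETCH NEXT FROM dbs INTO @db;
    END

    CLOSE dbs;
    DEALLOCATE dbs;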
One of the side effects of this method is that the global 'all' databases approach to the tlog will fail if a certain database has not yet had a full backup performed. A transaction log backup cannot happen until a full backup has occurred, so the log backup will error. Since the job cycles through 'all' databases, this forces me to have taken every database into account with the individual full backup jobs.
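A quick way to see which databases would trip that error is to compare sys.databases against the backup history in msdb; this assumes the history tables haven't been purged:

    -- Databases in FULL recovery with no full backup on record.
    -- These are the ones BACKUP LOG rejects (error 4214).
    SELECT d.name
    FROM sys.databases AS d
    WHERE d.recovery_model_desc = N'FULL'
      AND d.name <> N'tempdb'
      AND NOT EXISTS (
          SELECT 1
          FROM msdb.dbo.backupset AS b
          WHERE b.database_name = d.name
            AND b.type = 'D'               -- 'D' = full database backup
      );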
If I do my job, then all is well. For example, when a new database needs to be stood up, I get the space needed for it, create it, size it, and so on. I then create a full backup job and find the time when it should execute. I do nothing for tlogs, as they will automatically be backed up by the existing job. Good to go.
But what about the time when someone else creates a database? Shouldn't they let the DBA know? Yes, they should, but sometimes they do not. This is when the above scenario inadvertently helps me out. Developer X creates a new database but doesn't perform any of the steps I usually do. No backup is taken, no backup job is created, and so on. So, when the tlog job kicks off, it cycles through 'all' databases, encounters this one with no full backup, freaks out, fails, and emails me.
This is when I know that a database was created but not properly configured. In a way, it's like the database server is tattling on the developers who created the database without my knowledge. This lets me quietly go in, size the database, configure it properly, create the appropriate maintenance jobs and processes for it, and get it going. I usually do this without fanfare; I simply get it set up correctly and go back to my previous tasks. But now I know that this newly minted database has had some TLC, and will fit into my topology well.