Simple enough. With knowledge of how your systems are being accessed, the information these two procs provide, and a little time, you too can make informed decisions about your indexing needs.
I then coupled this information with a script from Pinal Dave (blog | twitter) that looks at unused indexes. I tweaked both these scripts from Pinal and Michelle to match the results I was looking for. Small tweaks, mind you. The result was two queries I could use to look into a particular database and see at a glance all missing and unused indexes. Using Kimberly's proc to review a table's existing indexes, and combining this with the knowledge I had of our systems, I made choices and either added or removed indexes at will. I felt powerful. The power of the DBA coursing through my veins...
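Neither Michelle's nor Pinal's script is reproduced here, but roughly speaking they work the missing-index and index-usage DMVs. Here is a bare-bones sketch of that kind of query, wrapped in PowerShell with Invoke-Sqlcmd from the SqlServer module; the instance and database names are placeholders, and the ranking math is just one common way to weight the DMV numbers, not necessarily theirs:

```powershell
# A bare-bones sketch only -- not the actual scripts from Michelle or Pinal.
# Assumes the SqlServer module (Invoke-Sqlcmd) and VIEW SERVER STATE permission.
Import-Module SqlServer

$serverInstance = 'PRODSQL01'     # placeholder instance name
$databaseName   = 'Sales'         # placeholder database name

# Indexes the optimizer wishes it had, roughly ranked by estimated benefit.
$missingIndexQuery = @'
SELECT  d.statement AS TableName,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        gs.user_seeks,
        gs.avg_user_impact,
        gs.avg_total_user_cost * gs.avg_user_impact * (gs.user_seeks + gs.user_scans) AS IndexAdvantage
FROM    sys.dm_db_missing_index_details d
JOIN    sys.dm_db_missing_index_groups g        ON g.index_handle = d.index_handle
JOIN    sys.dm_db_missing_index_group_stats gs  ON gs.group_handle = g.index_group_handle
WHERE   d.database_id = DB_ID()
ORDER BY IndexAdvantage DESC;
'@

# Nonclustered indexes that have been written to but not read since the last restart.
$unusedIndexQuery = @'
SELECT  OBJECT_NAME(i.object_id) AS TableName,
        i.name                   AS IndexName,
        us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM    sys.indexes i
JOIN    sys.dm_db_index_usage_stats us
        ON us.object_id = i.object_id AND us.index_id = i.index_id
WHERE   us.database_id = DB_ID()
  AND   i.type_desc = 'NONCLUSTERED'
  AND   us.user_seeks + us.user_scans + us.user_lookups = 0
ORDER BY us.user_updates DESC;
'@

# Run both against one database and eyeball the results.
Invoke-Sqlcmd -ServerInstance $serverInstance -Database $databaseName -Query $missingIndexQuery | Format-Table -AutoSize
Invoke-Sqlcmd -ServerInstance $serverInstance -Database $databaseName -Query $unusedIndexQuery  | Format-Table -AutoSize
```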
< side note >
I want to take this chance to thank these folks for their work, their sharing, and their knowledge. It's nice to stand on their shoulders, grab solutions that work, implement them, tweak them to my systems and topology, and end up with a workable solution that fits my needs.
Thank You!!
< /side note >
After digging into several databases on several servers, I ended up in a particular database last night, just before I went home. While I was looking at this db, one of our database developers called me with a complaint about a slow-running report he was working on. After looking into it, locating the spid, and digging around to see what I could find, we determined that it was indeed processing, it was indeed working, just very slowly. It had been running for an hour and a half by the time he called me. My first thought was that I had caused the slowness with the index work I was performing; after disclosing this to him and determining that was not the case, we kept looking. At one point I decided to stop chasing it, remembering that I was just about to dig into this very database and look at its missing and unused indexes. I told him we should do that now instead of digging further.
He killed the report and we watched it complete. He could tell that it had processed somewhat, but it was not finished yet. I then looked for missing indexes and, with his help, identified likely candidates that would affect his query in particular. We added a few indexes to those tables along with 2 other tables; I think we added only 6 indexes in total to this database. We then removed several indexes that were unused. After this work I suggested he kick off the report again. Almost immediately we received an alert from SQLMonitor indicating high processor utilization. This lasted for 3 minutes, and then levels returned to normal. The report finished in 4 minutes. This report, remember, had been approaching 2 hours previously, and now it completed in 4 minutes. We executed it again and monitored again. Same results. Yeah!! Database Developer is happy. I am happy. I go home.
The next day I realized that this was valuable information that could help me historically, if I could collect it. I set out to do so, and if you are still reading, do not be disappointed: I succeeded, or I wouldn't be writing this post about it; I'd still be working on it...
I converted the 2 queries I was using into PowerShell variables, and then I execute them against each database I want to, on each database server I choose. I collect this data and store it in a staging table in my DatabaseMonitoring database. After it has all been processed, I call a proc that moves the collected data from the staging table into a historical table, adding an identity ID and a datetime stamp. Now I know when the execution occurred, along with the information about the missing and unused indexes, per server, per database, per table, and so on.
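To give an idea of the shape of it, here is a stripped-down sketch, not the actual script: it reuses the two query variables from the earlier sketch, assumes the SqlServer module's Invoke-Sqlcmd and Write-SqlTableData cmdlets, and uses made-up names for the monitoring instance, the target list, the staging table, and the archive proc. My real script hard-codes its targets without any looping, as I mention below; the small foreach here just keeps the sketch short.

```powershell
# Collection sketch -- assumes $missingIndexQuery / $unusedIndexQuery from the
# earlier example are already defined. All names below are placeholders.
Import-Module SqlServer

$monitorServer = 'MONITORSQL01'                   # placeholder monitoring instance
$targets = @(                                     # hard-coded targets, no discovery
    @{ Server = 'PRODSQL01'; Database = 'Sales'  },
    @{ Server = 'PRODSQL01'; Database = 'Orders' },
    @{ Server = 'PRODSQL02'; Database = 'CRM'    }
)

foreach ($t in $targets) {
    $rows = @()

    # Missing-index suggestions, flattened into one common row shape.
    $rows += @(Invoke-Sqlcmd -ServerInstance $t.Server -Database $t.Database -Query $missingIndexQuery |
        ForEach-Object {
            [pscustomobject]@{
                ServerName   = $t.Server
                DatabaseName = $t.Database
                FindingType  = 'Missing'
                TableName    = $_.TableName
                IndexName    = ''
                Detail       = "EQ: $($_.equality_columns) INEQ: $($_.inequality_columns) INCL: $($_.included_columns) seeks: $($_.user_seeks)"
            }
        })

    # Unused-index findings in the same shape.
    $rows += @(Invoke-Sqlcmd -ServerInstance $t.Server -Database $t.Database -Query $unusedIndexQuery |
        ForEach-Object {
            [pscustomobject]@{
                ServerName   = $t.Server
                DatabaseName = $t.Database
                FindingType  = 'Unused'
                TableName    = $_.TableName
                IndexName    = $_.IndexName
                Detail       = "updates: $($_.user_updates)"
            }
        })

    if ($rows.Count -gt 0) {
        # Land the rows in the staging table in the DatabaseMonitoring database.
        Write-SqlTableData -ServerInstance $monitorServer -DatabaseName 'DatabaseMonitoring' `
            -SchemaName 'dbo' -TableName 'IndexFindings_Staging' -InputData $rows -Force
    }
}

# The archive proc copies staging rows into the history table, which supplies
# the identity ID and datetime stamp, and then clears staging.
Invoke-Sqlcmd -ServerInstance $monitorServer -Database 'DatabaseMonitoring' `
    -Query 'EXEC dbo.usp_ArchiveIndexFindings;'
```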
I did not get fancy with the PowerShell and do any looping or anything like that; the DBs that I look into are hard-coded in the script. Someday, maybe, I can get back to the script and make it purty. For now, I'm done. The code is in a job that fires off periodically and collects this information historically. Now I just have to look at it periodically as well.
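Creating that job can be done through the SSMS GUI or with the msdb procs; here is a hedged sketch of the latter, using a CmdExec step that shells out to powershell.exe. The job name, script path, and schedule are all assumptions, not the values from my actual setup.

```powershell
# Hypothetical one-time setup for the periodic collection job.
$jobSql = @'
EXEC msdb.dbo.sp_add_job
        @job_name = N'Collect index findings';

EXEC msdb.dbo.sp_add_jobstep
        @job_name  = N'Collect index findings',
        @step_name = N'Run collection script',
        @subsystem = N'CmdExec',
        @command   = N'powershell.exe -ExecutionPolicy Bypass -File "C:\Scripts\Collect-IndexFindings.ps1"';

EXEC msdb.dbo.sp_add_jobschedule
        @job_name  = N'Collect index findings',
        @name      = N'Nightly',
        @freq_type = 4,                 -- daily
        @freq_interval = 1,
        @active_start_time = 020000;    -- 02:00:00

EXEC msdb.dbo.sp_add_jobserver
        @job_name = N'Collect index findings';
'@

Invoke-Sqlcmd -ServerInstance 'MONITORSQL01' -Database 'msdb' -Query $jobSql
```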
I want to take some time soon to pull this accumulated data into a report and add it to my 'Monitor Everything' report that sends me daily email statuses for all these processes I have added to our monitoring topology. Soon. After a few projects are complete, I will return.