Well okay, maybe defaults are not all that bad, but they really should be reviewed before you let them run rampant. That is what Andy Yun (b|t) is asking us to talk about in this month's T-SQL Tuesday blog party. Microsoft has spent large sums of money and many hours of development effort making SQL Server easier to install, upgrade, and manage. The big problem is that not everyone has the same hardware or the same requirements.

While you CAN install SQL Server by mounting an ISO or inserting a DVD and hitting "next, next, next, install," what you get is an instance that will not be optimal.

This is when knowing what to change, and what not to change, pays off. Here are the items that leap to my mind as requiring review to ensure the default is acceptable. (They are presented in no particular order; the numbering is only for presentation.)

1. MAXDOP – Maximum Degree of Parallelism. Whether you have a symmetric multiprocessing (SMP) computer, a non-uniform memory access (NUMA) computer, or hyperthreading-enabled processors all plays a part in determining this number. You need to review your hardware configuration to ensure your setting follows Microsoft's recommendations. (A T-SQL sketch for this follows the list.)

2. Max Memory – Yes, SQL Server is built to dynamically allocate and de-allocate memory, but out of the box there is no limit. By setting maximum memory on your instance you can alleviate memory pressure on your system. This is extremely important if you have multiple instances on the same hardware, as Microsoft does not recommend letting SQL Server dynamically allocate memory in that type of configuration. (Sketch after the list.)

3. Cost Threshold for Parallelism – Interesting, as there is no hard and fast rule for what this setting should be. Out of the box it is five, which might be just fine for your workload; however, you need to review it and adjust it to support your needs. (Sketch after the list.)

4. Database File Location – We all know that logs and data should be in separate locations. If you accept the defaults, this is not going to happen: your MDF and LDF will be stored in the same place. (Sketch after the list.)

5. TempDB – How many files should you have? Some recommend one file for each "processor" on your system. That might have been a good suggestion prior to today's multi-core, multi-proc systems. In my production environment I have 8 TempDB files split across 2 different storage locations. One more thing to remember: keep the file sizes the same across all locations. (Sketch after the list.)

6. Model Database Settings – This is where you can actually get a big bang for your buck. Every database you create is based on the model database. Change the data file's default auto-growth from 1 MB to something more appropriate, change the log file growth from 10% to a fixed amount, and don't forget to change your database recovery model. Once you've set this on your model database, every database created on that instance will have all of your default settings. (Sketch after the list.)
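For item 1, here is a minimal sketch of checking what hardware SQL Server sees and then setting MAXDOP with sp_configure. The value 8 is purely a placeholder, not a recommendation; pick the number that matches Microsoft's guidance for your core and NUMA layout.

```sql
-- How many logical CPUs does SQL Server see, and is hyperthreading in play?
SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;

-- How many NUMA nodes are online (the DAC node reports a different state)?
SELECT COUNT(*) AS online_numa_nodes
FROM sys.dm_os_nodes
WHERE node_state_desc = N'ONLINE';

-- Set MAXDOP; 8 is only a placeholder value.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```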
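For item 2, a sketch of capping max server memory. The 16384 MB figure is just an example; size it for your box and leave headroom for the operating system. This assumes 'show advanced options' is already enabled, as in the previous sketch.

```sql
-- Cap SQL Server's memory; 16384 MB is only an example value.
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```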
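For item 3, the same sp_configure pattern applies. The value 50 is a commonly suggested starting point, not a rule; test against your own workload before settling on a number.

```sql
-- Raise the cost threshold for parallelism above the default of 5.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```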
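For item 4, a sketch that checks where the instance will put files by default and then creates a database with the data and log files split explicitly. The database name and the D:\ and L:\ paths are made up for illustration, and the SERVERPROPERTY values assume a reasonably recent version of SQL Server.

```sql
-- Where will files land if you accept the defaults?
SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS default_data_path,
       SERVERPROPERTY('InstanceDefaultLogPath')  AS default_log_path;

-- Hypothetical database and paths: put data and log on separate drives.
CREATE DATABASE SalesDemo
ON PRIMARY
    (NAME = SalesDemo_data, FILENAME = N'D:\SQLData\SalesDemo.mdf')
LOG ON
    (NAME = SalesDemo_log, FILENAME = N'L:\SQLLogs\SalesDemo.ldf');
```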
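For item 5, a sketch of evening out TempDB file sizes and adding a data file on a second storage location. The sizes and the T:\ path are illustrative only.

```sql
-- Keep every tempdb data file the same size so the round-robin fill stays even.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

-- Add another data file on a second storage location (path is hypothetical).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```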
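For item 6, a sketch of changing model's file growth and recovery model. The sizes and the choice of SIMPLE recovery are examples only; use whatever your standard is, and every new database will inherit it.

```sql
-- New databases inherit these settings from model.
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 512MB, FILEGROWTH = 256MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);

-- Pick the recovery model you want new databases to start with.
ALTER DATABASE model SET RECOVERY SIMPLE;
```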

These are just a few items that I've dealt with over the years. I'm sure there are others, and I am very interested in reading what everyone else has to say about them in this blog party.

Thank you, Andy, for hosting this month. Power to the non-defaults.
