Friday, August 22, 2014

Maintenance Plan for SQL Databases

Creating a maintenance plan for SQL Server databases is a core responsibility of a SQL DBA. A maintenance plan is necessary to avoid the bottlenecks that can build up when regular maintenance is neglected.

Here are some of the regular tasks a SQL Maintenance Plan can address:

Check Database Integrity
The Check Database Integrity task performs internal consistency checks of the data and index pages within the database.
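Outside the wizard, the same check can be run manually with DBCC CHECKDB. A minimal sketch, using the Patientdatadb database that appears later in this blog as an example:

```sql
-- Full consistency check of data and index pages;
-- NO_INFOMSGS suppresses the informational chatter
DBCC CHECKDB ('Patientdatadb') WITH NO_INFOMSGS;
```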

Shrink Database log files
The Shrink Database task reduces the disk space consumed by the database and log files by removing empty data and log pages. (Note that shrinking on a regular schedule is generally discouraged, because it can increase index fragmentation; use this task sparingly.)
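If you ever need to run the shrink manually rather than from the plan, the equivalent T-SQL is below; the 10 is the target percentage of free space to leave, and is just an example value:

```sql
-- Shrink the database, leaving roughly 10% free space
DBCC SHRINKDATABASE ('Patientdatadb', 10);
```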

Reorganize Indexes
The Reorganize Index task defragments and compacts clustered and non-clustered indexes on tables and views. This will improve index-scanning performance.
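The manual equivalent is ALTER INDEX ... REORGANIZE. A sketch, where dbo.PatientRecords is a hypothetical table name:

```sql
-- Reorganize (defragment in place) every index on the table
ALTER INDEX ALL ON dbo.PatientRecords REORGANIZE;
```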

Rebuild indexes
The Rebuild task reorganizes data on the data and index pages by rebuilding indexes. This improves performance of index scans and seeks. This task also optimizes the distribution of data and free space on the index pages, allowing faster future growth.
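The same operation in T-SQL is ALTER INDEX ... REBUILD. A sketch, again using a hypothetical table name; the fill factor of 80 is only an example of how to reserve free space on the index pages:

```sql
-- Rebuild every index on the table, leaving 20% free space per page
ALTER INDEX ALL ON dbo.PatientRecords REBUILD WITH (FILLFACTOR = 80);
```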

Update Statistics
The Update Statistics task ensures the query optimizer has up-to-date information about the distribution of data values in the tables. This allows the optimizer to make better judgments about data access strategies.
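The manual equivalents are UPDATE STATISTICS for a single table, or the sp_updatestats procedure for the whole database (table name hypothetical):

```sql
-- Refresh statistics on one table, scanning every row
UPDATE STATISTICS dbo.PatientRecords WITH FULLSCAN;

-- Or refresh statistics across the entire current database
EXEC sp_updatestats;
```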

Clean History
The History Cleanup task deletes historical data about Backup and Restore, SQL Server Agent, and Maintenance Plan operations.  This wizard allows you to specify the type and age of the data to be deleted.
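The wizard drives the msdb cleanup procedures under the hood; you can call them yourself. A sketch that deletes history older than three months (the cutoff is an example value):

```sql
-- Delete backup/restore and Agent job history older than 3 months
DECLARE @cutoff datetime = DATEADD(MONTH, -3, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;
```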

Backup database
The Back Up Database (Full) task allows you to specify the source databases, destination files or tapes, and overwrite options for a full backup.
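The equivalent manual command is BACKUP DATABASE. A minimal sketch; the destination path is hypothetical:

```sql
-- Full backup to disk; INIT overwrites the file, STATS reports progress
BACKUP DATABASE Patientdatadb
TO DISK = 'C:\Backups\Patientdatadb.bak'
WITH INIT, STATS = 10;
```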

Maintenance clean up

The Maintenance Cleanup task removes files left over from executing a maintenance plan.

Launch SQL Server Management Studio.
Expand the "Management" node, right-click "Maintenance Plans", and select "Maintenance Plan Wizard".

 Click on "Next"


 Choose a "Name" for the maintenance plan. This differentiates it from all other maintenance plans on the SQL instance.
You could choose a single operation to run all the tasks or break them down into segments to be run separately.
Click on "Change" to choose an appropriate time to run the job.


This step is straightforward: choose the date and time you want the maintenance job to run.


At this point, you will choose all the tasks you want the maintenance plan to encapsulate.


Integrity checks will be carried out on the selected databases.


Here you choose the databases. You can also select all databases.


You can also include indexes by checking the "include indexes" option.


Index reorganization will be performed on all the databases selected. In this stage, you select the databases for this task. You can also choose all databases.


Here you specify the database where the indexes will be rebuilt.


At this point you define the statistics to update.


Choose the frequency and files to clean.


Choose the mode of maintenance cleanup.


Getting feedback from the maintenance process is essential because it will indicate success or failure of the process. You can choose to write a report to a text file or email the report directly to yourself.


At this point, you can point to a location for storing the report.


The next screen will display a summary of all the tasks to be completed.


Once the maintenance plan has been successfully created, a "Success" message appears in the next box.


The next step is to verify the Maintenance Plan.
1. Expand the "Management" node.
2. Expand "Maintenance Plans".
You should see the maintenance plan listed, as shown in the image below.
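You can also verify the plan from T-SQL by querying msdb, where maintenance plan metadata is stored. A sketch:

```sql
-- List maintenance plans registered on this instance
SELECT name
FROM msdb.dbo.sysmaintplan_plans;
```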


Thanks for reading.

Tuesday, August 12, 2014

Defragmenting indexes in SQL Server

One of the key tasks I have performed to improve performance on a SQL Server database is defragmenting the indexes in the database.

Indexes are crucial for database performance because they make database searches faster and more efficient.

Index problems

What are some of the issues indexes may have in a database?
There are some issues usually associated with indexes. Here are some of them:

Indexes with a high fragmentation ratio: A heavily fragmented index can cause serious performance problems in the database.

Too many indexes on a table: Though indexes are desirable, too many of them can be a problem. Excess indexes lead to redundancy, can confuse the choice of which index to use during SQL execution, and make update statements take much longer to complete.

Hypothetical indexes: These are indexes created during a SQL tuning operation to improve performance. They may not be needed after the query is executed.

Unnecessary indexes: creating an index on a very small table is not recommended because it wastes resources.

Databases identified with one or more tables with indexes that may require updated statistics: If the statistics of an index are not kept up to date, there can be a performance issue.

There are foreign keys with no supporting indexes: It is best practice to create an index to support foreign keys.

Indexes have been identified with an index key larger than the recommended size (900 bytes): Index keys should not be greater than 900 bytes.

High number of Full Scans versus Index Searches: This is a clear sign that a table lacks a needed index.
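One quick way to spot tables like these is SQL Server's missing-index DMVs, which record indexes the optimizer would have liked to use. A sketch, ordered by a rough estimate of impact (the column selection here is illustrative):

```sql
-- Top missing-index suggestions recorded by the optimizer
SELECT TOP 10
    d.[statement] AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    gs.user_seeks,
    gs.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats gs
    ON gs.group_handle = g.index_group_handle
ORDER BY gs.user_seeks * gs.avg_user_impact DESC;
```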


Rebuilding an index

You can use the GUI tool to rebuild an index.



Retrieving Fragmentation info on all indexes

Use this script to get information about all the tables and indexes in a database. You can use the resulting list to prepare a collation of indexes to defragment, and to set a defragmentation threshold based on the trend.

-- Declare database
use Patientdatadb
go

-- Show fragmentation per table
SELECT dbschemas.[name] as 'Schema',
dbtables.[name] as 'Table',
dbindexes.[name] as 'Index',
indexstats.avg_fragmentation_in_percent,
indexstats.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
AND indexstats.index_id = dbindexes.index_id
WHERE indexstats.database_id = DB_ID()
ORDER BY indexstats.avg_fragmentation_in_percent desc

The result should list all the indexes on all the tables in the database, along with their fragmentation percentage.



Defragmenting all indexes in a database

To defragment all indexes in a database that are fragmented above a declared threshold, you can use the script below:

Please note that the threshold in this script is set to 30%.


-- Declare database
use Patientdatadb
go 
-- Declare variables
SET NOCOUNT ON;
DECLARE @tablename varchar(255);
DECLARE @execstr   varchar(400);
DECLARE @objectid  int;
DECLARE @indexid   int;
DECLARE @frag      decimal;
DECLARE @maxfrag   decimal;

-- Declare maximum fragmentation to allow
SELECT @maxfrag = 30.0;

-- Declare a cursor
DECLARE tables CURSOR FOR
   SELECT TABLE_SCHEMA + '.' + TABLE_NAME
   FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_TYPE = 'BASE TABLE';

-- Create the table
CREATE TABLE #fraglist (
   ObjectName char(255),
   ObjectId int,
   IndexName char(255),
   IndexId int,
   Lvl int,
   CountPages int,
   CountRows int,
   MinRecSize int,
   MaxRecSize int,
   AvgRecSize int,
   ForRecCount int,
   Extents int,
   ExtentSwitches int,
   AvgFreeBytes int,
   AvgPageDensity int,
   ScanDensity decimal,
   BestCount int,
   ActualCount int,
   LogicalFrag decimal,
   ExtentFrag decimal);

-- Open the cursor
OPEN tables;

-- Loop through all the tables in the database
FETCH NEXT
   FROM tables
   INTO @tablename;

WHILE @@FETCH_STATUS = 0
BEGIN
-- Do the showcontig of all indexes of the table
   INSERT INTO #fraglist
   EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
      WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS');
   FETCH NEXT
      FROM tables
      INTO @tablename;
END;

-- Close and deallocate the cursor
CLOSE tables;
DEALLOCATE tables;

-- Declare the cursor for the list of indexes to be defragged
DECLARE indexes CURSOR FOR
   SELECT ObjectName, ObjectId, IndexId, LogicalFrag
   FROM #fraglist
   WHERE LogicalFrag >= @maxfrag
      AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0;

-- Open the cursor
OPEN indexes;

-- Loop through the indexes
FETCH NEXT
   FROM indexes
   INTO @tablename, @objectid, @indexid, @frag;

WHILE @@FETCH_STATUS = 0
BEGIN
   PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ',
      ' + RTRIM(@indexid) + ') - fragmentation currently '
       + RTRIM(CONVERT(varchar(15),@frag)) + '%';
   SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@objectid) + ',
       ' + RTRIM(@indexid) + ')';
   EXEC (@execstr);

   FETCH NEXT
      FROM indexes
      INTO @tablename, @objectid, @indexid, @frag;
END;

-- Close and deallocate the cursor
CLOSE indexes;
DEALLOCATE indexes;

-- Delete the temporary table
DROP TABLE #fraglist;
GO
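Note that DBCC SHOWCONTIG and DBCC INDEXDEFRAG are deprecated; on SQL Server 2005 and later, the same loop can be driven from sys.dm_db_index_physical_stats (used earlier in this post) together with ALTER INDEX. A minimal sketch, reusing the 30% threshold from above:

```sql
-- Build and run REORGANIZE statements for fragmented indexes
DECLARE @sql nvarchar(max) = N'';
SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + N' REORGANIZE;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.tables t  ON t.object_id = ps.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE ps.avg_fragmentation_in_percent >= 30
  AND i.name IS NOT NULL;  -- skip heaps, which have no index name
EXEC sp_executesql @sql;
```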

Friday, August 1, 2014

Bulk data loading into SQL Server Database from Excel

The easiest way to load bulk data into a SQL Server database is the BULK INSERT T-SQL command. Open a New Query window in SQL Server Management Studio and run this command:

BULK INSERT dbo.MyTestDB FROM 'C:\Testfolder\Importfile.txt' 
WITH ( FIELDTERMINATOR =',', FIRSTROW = 2 )

or

BULK INSERT dbo.MyTestDB 
FROM 'C:\Testfolder\Importfile.txt' 
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO

Where dbo.MyTestDB is the destination table and 'C:\Testfolder\Importfile.txt' is the data file. Note that BULK INSERT reads delimited text, so save the Excel sheet as a CSV/text file first.

Also, using SQL Server Management Studio, it is very easy to load bulk data into a SQL Server database using the Import/Export GUI wizard.

Here are quick steps to follow to load data (csv, xls, or xlsx) using the GUI:

Step 1: Verify columns
You need to verify the columns on the table and match them with the data on the Excel spreadsheet.



Step 2: Run import wizard
Right-click on the database and choose "Import Data" (under "Tasks"). This will launch the "Import/Export" wizard for the SQL database.


The import wizard will open and will guide you through the whole data import process. Just follow the guide and you should be fine.



Step 3: Choose Source Host and database
The source host is the server name or hostname of the server holding the data you want to copy. Once the server name has been specified, its databases should appear in the dropdown list. Choose the database you want to export from.


Step 4: Select source Table(s)
At this point, you need to select the source and destination tables. You can choose multiple tables at the same time.
Source is the table where data will be exported/copied from
Destination is the table where data will be imported/loaded


Step 5: Map the columns
At this point, you can map or edit the columns to be imported/exported.
You can use "Edit SQL" to edit how the table will be loaded.
Check "Source" to verify the source table.
Check "Destination" to verify the destination table.
If the table does not already exist, SQL Server can create a new one when you select the "Create destination table" option.
There is also an option to "Drop and recreate destination table".
To skip column(s), click on the destination column and change to "null".


Step 6: Feedback
This last page shows the confirmation that the data load was successful.

Thanks for reading!