DBAs are primarily responsible for configuring and managing database infrastructure, including database file locations and database encryption, areas that do not have a direct impact on users and applications. This lesson, which is still part of Configuring SQL Server 2012 Components, covers the following topics:
- Designing and Managing Filegroups
- Configuring and Standardizing Databases
- Using Data Compression
- Transparent Data Encryption
- Partitioning Indexes and Tables
- Managing Transaction Log Files
- Using Database Console Commands (DBCC)
Designing and Managing Filegroups
Each database has one primary filegroup, which hosts the primary data file and any secondary data files you assign to it; the system tables are also stored in the primary filegroup. A database can have many secondary filegroups, which you can create for optimization, disk allocation, and maintenance purposes. Secondary data files use the extension .ndf and can be assigned to different filegroups, which in turn can be located on different volumes. The database log file, however, is not a member of any filegroup; it is managed separately from filegroups.
You can add filegroups via the database properties in SSMS or by issuing the statement below:
ALTER DATABASE [AdventureWorks2012] ADD FILEGROUP [NEW_FILEGROUP]
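A filegroup is only useful once it contains at least one data file. As a sketch (the logical file name and path here are hypothetical, so adjust them for your environment), you could add a file to the new filegroup like this:

```sql
-- Add a secondary data file (.ndf) to the filegroup created above.
-- The NAME and FILENAME values are placeholders.
ALTER DATABASE [AdventureWorks2012]
ADD FILE (NAME = 'NewFilegroupFile1', FILENAME = 'D:\SQLData\NewFilegroupFile1.ndf')
TO FILEGROUP [NEW_FILEGROUP];
```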
You can move indexes between filegroups, but note the following conditions when doing so:
- You cannot move indexes created by a primary key constraint through SSMS, but you can move them by re-creating the index with the CREATE INDEX statement and the (DROP_EXISTING=ON) option.
- If the table or index is partitioned, you must select the partition scheme in which to move it.
- You can move a clustered index by using online processing, allowing user access to data during the move.
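As a sketch of the DROP_EXISTING approach described above (the table, index, column, and filegroup names here are all hypothetical), re-creating a primary key's index on a different filegroup might look like this:

```sql
-- Re-create an existing index on a different filegroup in one step,
-- instead of dropping and rebuilding it separately.
-- [PK_MyTable], [dbo].[MyTable], [MyTableID], and [NEW_FILEGROUP] are placeholders.
CREATE UNIQUE CLUSTERED INDEX [PK_MyTable]
ON [dbo].[MyTable] ([MyTableID])
WITH (DROP_EXISTING = ON, ONLINE = ON)  -- ONLINE keeps data accessible during the move
ON [NEW_FILEGROUP];
```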
Configuring and Standardizing Databases
You can standardize the configuration of new databases by configuring appropriate settings, such as Auto Close, Auto Shrink, and the database recovery model, on the model system database. The current configuration stored in the model database serves as the template whenever you create a new database. You can change database properties manually in SSMS or via the ALTER DATABASE statement.
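For example (a minimal sketch; pick the values that match your own standard rather than these specific ones), you could configure model so that every database created afterward inherits the same baseline options:

```sql
-- Settings applied to model are inherited by all databases created afterward.
ALTER DATABASE model SET AUTO_CLOSE OFF;
ALTER DATABASE model SET AUTO_SHRINK OFF;
ALTER DATABASE model SET RECOVERY FULL;
```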
Using Data Compression
There are two main types of data compression: page and row. One drawback of data compression is increased CPU usage, because data is compressed and decompressed with every access. You can only use data compression in user databases, and for SQL Server 2012 only the Enterprise and Developer editions support it. In total, SQL Server 2012 offers three forms of compression: row-level, Unicode, and page-level compression.
You can configure data compression on the following:
- Clustered tables
- Heap tables
- Non-clustered indexes
- Indexed views
- Individual partitions of a partitioned table or index
Row-level compression stores fixed-length numeric data types and char data types as though they were variable-length data types. It does not store NULL or 0 values, and it reduces the metadata required to store each row. One advantage of row-level compression is that it uses less CPU than page-level compression.
To compress a table by using row-level compression:
ALTER TABLE [yourtableName] REBUILD WITH (DATA_COMPRESSION=ROW)
And to configure an index with row-level compression:
ALTER INDEX [yourindexName] ON [yourtableName] REBUILD PARTITION = ALL WITH (DATA_COMPRESSION=ROW)
Unicode compression enables the database engine to compress Unicode values stored in page- or row-compressed objects. This type of compression is automatically used where appropriate when you enable row-level or page-level compression.
Page-level compression compresses data by storing repeating values and common prefixes only once and then making references to those values from other locations within the table. Page-level compression is always used together with row-level compression. Data that contains many repeating values compresses better than data populated with mostly unique values.
To compress a table by using page-level compression:
ALTER TABLE [yourtableName] REBUILD WITH (DATA_COMPRESSION=PAGE)
Use the same ALTER INDEX command with the (DATA_COMPRESSION=PAGE) option when configuring an index for page-level compression.
The best way to determine the benefits of compression on an object is to use the sp_estimate_data_compression_savings stored procedure, which is available only in the SQL Server 2012 Enterprise edition.
For example, to estimate the compression benefits of using row compression on the HumanResources.Employee table in the AdventureWorks2012 database, issue the following statement:
EXEC sp_estimate_data_compression_savings 'HumanResources', 'Employee', NULL, NULL, 'ROW';
To estimate the page-level compression savings for the same table:
EXEC sp_estimate_data_compression_savings 'HumanResources', 'Employee', NULL, NULL, 'PAGE';
There is also a GUI for this in SSMS that calculates the savings of the compression setting you have chosen. To launch the wizard, right-click the table selected for compression > Storage > Manage Compression. The Calculate button can help you decide which type of compression will save more space. This wizard is also available at the index level of the table.
Transparent Data Encryption
Transparent data encryption (TDE) enables you to encrypt an entire database. TDE protects the database against unauthorized third parties gaining access to the hard disk or to database backups. TDE encrypts the database by using a database encryption key (DEK) that is stored in the database boot record. The DEK is in turn protected by a certificate in the master database, which is protected by the database master key and, ultimately, by the service master key. As an alternative, you can use BitLocker Drive Encryption, supported by Windows Server 2008 and Windows Server 2008 R2, although BitLocker does not encrypt database backups. If any database on the server instance uses TDE, tempdb is also automatically encrypted. To enable TDE, complete the following steps:
1. Create the database master key in the master database by using the CREATE MASTER KEY ENCRYPTION BY PASSWORD statement.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyPassw0rd'
2. Create the certificate protected by the master key.
CREATE CERTIFICATE ServerCertificate WITH SUBJECT = 'Server Certificate';
3. Create the DEK for the specific database.
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_128 ENCRYPTION BY SERVER CERTIFICATE ServerCertificate;
After executing the above statement, you will receive a warning like the one below.
Warning: The certificate used for encrypting the database encryption key has not been backed up. You should immediately back up the certificate and the private key associated with the certificate. If the certificate ever becomes unavailable or if you must restore or attach the database on another server, you must have backups of both the certificate and the private key or you will not be able to open the database.
This means that you should back up the server certificate in the master database. If for some reason you need to restore a backup copy of the encrypted database to another instance, you cannot access the data protected by TDE without the certificate. You can use the BACKUP CERTIFICATE statement to create a backup of the certificate and private key, both of which are required for certificate recovery.
BACKUP CERTIFICATE ServerCertificate TO FILE = 'ServerCertificateBackup'
WITH PRIVATE KEY (FILE = 'PrivateKeyFile', ENCRYPTION BY PASSWORD = 'MyPassw0rd')
The backup file will be written to the \MSSQL\DATA folder of your server instance.
4. Encrypt the database by using the ALTER DATABASE statement:
ALTER DATABASE [AdventureWorks2012] SET ENCRYPTION ON;
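As a quick sanity check after enabling encryption (this query is not part of the steps above, just a way to verify them), you can inspect the sys.dm_database_encryption_keys dynamic management view; an encryption_state of 3 means encryption is complete:

```sql
-- List the encryption state of each encrypted database.
-- encryption_state: 2 = encryption in progress, 3 = encrypted.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;
```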
Partitioning Indexes and Tables
Partitioning divides tables and indexes across multiple filegroups. This is a great way to deal with large tables in your database. When creating a partitioned table or index, follow these steps:
Create the partition function that assigns rows of a table or index to partitions based on the values of a specified column. See my sample script below:
CREATE PARTITION FUNCTION pfRange500k (int)
AS RANGE LEFT FOR VALUES (500000)
RANGE LEFT means that each boundary value belongs to the partition on its left, so any value up to and including 500000 will go to the Data partition below, and anything above that will go to the Data2 partition.
Create the filegroups that will hold the partitions. You may have done this already when designing the database via SSMS. It would be even better if you create multiple filegroups when you try this out.
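As a sketch of this step (the filegroup names Data and Data2 match the partition scheme used in this section; the file names and paths are hypothetical), creating the filegroups and their files might look like this:

```sql
-- Create two filegroups and add one data file to each.
-- Adjust the FILENAME paths to match your environment.
ALTER DATABASE [AdventureWorks2012] ADD FILEGROUP [Data];
ALTER DATABASE [AdventureWorks2012] ADD FILEGROUP [Data2];

ALTER DATABASE [AdventureWorks2012]
ADD FILE (NAME = 'Data_File1', FILENAME = 'D:\SQLData\Data_File1.ndf')
TO FILEGROUP [Data];

ALTER DATABASE [AdventureWorks2012]
ADD FILE (NAME = 'Data2_File1', FILENAME = 'E:\SQLData\Data2_File1.ndf')
TO FILEGROUP [Data2];
```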
Create a partition scheme that maps the partitions to filegroups.
CREATE PARTITION SCHEME psSalesFGs
AS PARTITION pfRange500k TO (Data, Data2);
Create or modify an index or table and specify the partition scheme. To create a table on the partition scheme psSalesFGs we created earlier:
CREATE TABLE SalesBigTable (
[SaleID] int PRIMARY KEY,
[ProductID] int NOT NULL,
[Quantity] smallint NOT NULL,
[SaleAmount] numeric(16,2) NOT NULL,
[SaleDate] datetime NOT NULL
) ON psSalesFGs(SaleID)
To launch the wizard for creating a partition for a table, right click on the table>Storage>Create Partition.
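Once data has been loaded, you can check how rows are distributed across the partitions (a quick sanity check using the standard sys.partitions catalog view):

```sql
-- Show the row count in each partition of SalesBigTable.
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('SalesBigTable')
  AND index_id IN (0, 1);  -- heap or clustered index only
```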
Managing the Transaction Log
To manage your transaction log files and check current usage and free space, you can use SSMS. Right-click the database > Tasks > Shrink > Files to open the Shrink File dialog box.
On the File type drop-down list, select Log. From there you can view the allocated space and the available free space.
Log truncation is a process by which the Database Engine frees space in the logical log for reuse by the transaction log. Log truncation occurs automatically in the following cases;
- When a database uses the simple recovery model, the Database Engine truncates the transaction log after a checkpoint. Checkpoints are triggered each time the number of log records reaches the number the Database Engine estimates it can process within the recovery interval server configuration option.
- When a database uses the full or bulk-logged recovery model, the transaction log is automatically truncated after a transaction log backup completes, provided a checkpoint has occurred since the last backup.
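Under the full recovery model, then, routine log backups are what keep the log from growing indefinitely. A minimal sketch (the backup path here is hypothetical):

```sql
-- Back up the transaction log so its inactive portion can be truncated.
BACKUP LOG [AdventureWorks2012]
TO DISK = 'D:\SQLBackups\AdventureWorks2012_Log.trn';
```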
You can use the DBCC SQLPERF(LOGSPACE) command to view transaction log information:
DBCC SQLPERF(LOGSPACE)
This returns the log size and the percentage of log space used for each database.
For a better understanding of the internals of a transaction log file, this link is a great read.
Using Database Console Commands (DBCC)
DBCC commands enable you to perform SQL Server 2012 administration tasks by using queries instead of the GUI, plus it makes me look more geeky and cool 🙂 They are grouped into four main categories: informational, maintenance, validation, and miscellaneous.
So, here’s a sample script for some of the DBCC commands per category
/* Informational Commands */
DBCC SQLPERF(LOGSPACE) --View transaction log size info
DBCC USEROPTIONS --View the current connection's user SET options
/* Maintenance Commands */
DBCC DBREINDEX('Person.Person') --Rebuilds indexes on a table
DBCC CLEANTABLE(AdventureWorks2012, 'Production.ProductDescription') --Reclaims unused space from tables
/* Validation Commands */
DBCC CHECKTABLE('Person.Person') --Integrity check of table pages/structure
DBCC CHECKFILEGROUP --Integrity check of filegroup structure/allocation
/* Misc. Commands */
DBCC HELP('CHECKDB') --Syntax information for DBCC statements.. use '?' for list
For more information on the many DBCC commands and trace flags (DBCC TRACEON), you can go to Books Online, or simply highlight the DBCC command in the query window and press F1, and voilà! You get everything about that DBCC command plus some samples too. Neat!