Flirting with the idea of taking exam [70-432: Microsoft SQL Server 2008, Implementation and Maintenance] I created loads of post-it notes based on the book: “The Real MCTS SQL Server 2008 Exam 70-432 Prep kit”. I’ve decided to re-visit these notes (before the stickiness wears off and I lose them) and transfer them onto this blog. This will ultimately be a great knowledge refresher and quick reference guide for me. Hopefully you’ll find them useful too. Thanks for reading.

bcp

BCP

  • CMD line (runs outside SQL process)
  • Limited to SQL and Flat files
  • Case sensitive options
  • Use double quotes when exporting using a query
  • No field terminator specified – it will use tab
  • No row terminator specified – it will use newline (\n)
  • (-f) format file (-x) xml format file
  • (-b) batch size, run in parallel, distribute load
  • SQL Server > BCP > Text file and vice-versa

BCP Options (case sensitive)

  • Query  (instead of table)
  • -N (Unicode)
  • -n (native)
  • -c (Character data type)
  • -t (field terminator)
  • -r (row terminator)
  • -f (format file) -x (xml)
  • -F (first row to import; e.g. -F 2 skips a header row) -L (last row)
  • -b (batch size)
  • Native format is best for transfers between SQL Servers
    • Regular native (-n)
    • Unicode native (-N) – best likelihood of transferring all chars in nchar and nvarchar
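The switches above combine like this on the command line (a sketch; server, database, table and file names are hypothetical):

```
rem Export a table in character format; tab and newline terminators are the defaults
bcp AdventureWorks.dbo.Sales out C:\exports\sales.dat -c -T -S MYSERVER

rem Export the result of a query (note the double quotes around the query)
bcp "SELECT TOP 100 * FROM AdventureWorks.dbo.Sales" queryout C:\exports\top100.dat -c -t"," -r"\n" -T -S MYSERVER

rem Import in 1000-row batches using a format file
bcp AdventureWorks.dbo.Sales in C:\exports\sales.dat -f C:\exports\sales.fmt -b 1000 -T -S MYSERVER
```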

Bulk Insert

  • T-SQL
  • Only import data
  • Cannot export (unlike BCP, which can)
  • BULK INSERT (db..table) FROM (datafile) WITH
  • BULK INSERT and BCP work with FLAT FILES only.

Bulk Insert Options

  • ‘char’ – character format
  • ‘native’ – native format (performance, better than char)
  • ‘widechar’ – Unicode
  • ‘widenative’ – native Unicode format (performance, better than widechar)
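A minimal BULK INSERT combining the options above (table and file paths are hypothetical):

```sql
BULK INSERT AdventureWorks.dbo.Sales
FROM 'C:\imports\sales.dat'
WITH (
    DATAFILETYPE    = 'char',   -- or 'native', 'widechar', 'widenative'
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 1000,
    TABLOCK                     -- helps qualify for minimal logging
);
```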

Recovery Model + Bulk Operations

  • Full – everything is logged, slower, but recover in full
  • Bulk logged
    • Logs normal things in full, bulk operations minimally
    • Instead of logging details of each row, it records which extents and pages were updated
    • Less I/O = faster
    • Downside – log file does not contain all the information
    • When you back up the log, the changed pages/extents are backed up
    • Log file can remain small, but the backup file could be big
    • You should perform a backup after bulk insert
    • If you had full recovery model, change to bulk, do the import, then change back to full and then backup the log file
    • Minimal logging requires the following:
      • Not replicated
      • Tablock placed
      • No indexes (unless the table is empty)
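The full → bulk-logged → full workflow described above can be sketched like this (database and path names are hypothetical):

```sql
ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED;

-- ... perform the bulk load here (BULK INSERT / bcp with TABLOCK) ...

ALTER DATABASE AdventureWorks SET RECOVERY FULL;

-- Back up the log straight away so the minimally logged extents are captured
BACKUP LOG AdventureWorks TO DISK = 'C:\backups\AdventureWorks_log.trn';
```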

Optimize Bulk load

  • Multiple batches – each batch is committed as its own transaction
    • Only fails that particular batch
    • Can have multiple clients doing the processing
    • Single batch is default, entire load is success or failure, cannot clear log file until complete
    • Use bulk-logged model
    • Pre-sort to match key

Ad-hoc distributed queries

  • Allows clients to write queries that access external data
  • Enable via sp_configure ‘show advanced options’, then ‘Ad Hoc Distributed Queries’, 1
  • Each provider can be allowed or disallowed
  • Option is a negative property (tick to disable)
  • Sysadmins can still use it even if disabled

OPENROWSET()

  • Access data across a data source
  • Can be used for bulk load (must use format file)
  • Linked server not required
  • Returns a table e.g. select * from openrowset(…)
  • Can be used with INSERT, UPDATE, DELETE

OPENROWSET(BULK, …)

  • Must use a format file
  • Reads entire data file as single value
    • SINGLE_BLOB – large binary
    • SINGLE_CLOB – character value
    • SINGLE_NCLOB – Unicode value
    • Reads data as a single column
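Two OPENROWSET(BULK …) sketches (file paths and table names are hypothetical):

```sql
-- Read an entire file as a single character value
SELECT doc.BulkColumn
FROM OPENROWSET(BULK 'C:\imports\notes.txt', SINGLE_CLOB) AS doc;

-- Bulk load into a table; a format file is required for row-by-row loads
INSERT INTO dbo.Sales
SELECT *
FROM OPENROWSET(BULK 'C:\imports\sales.dat',
                FORMATFILE = 'C:\imports\sales.fmt') AS rows;
```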

Differences

  • OPENROWSET – use in place of a table – can pass query
  • OPENDATASOURCE – use in place of instance name/FQ Name
  • OPENQUERY – Pass-through query like OPENROWSET, but uses linked servers!

Linked Servers

  • Sits on top of OLE DB
  • Can specify a connection user
  • Replace instance with (linkedserver.database.schema.table)
    • sp_addlinkedserver (add the server)
    • sp_serveroption (configuration)
    • sp_addlinkedsrvlogin (security)
    • Data Access option – true – allows select/insert/update/delete
    • RPC Out – true – allows use of stored procedures
    • master.sys.servers – list of linked servers
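The three procedures above fit together like this (server, login and table names are hypothetical):

```sql
EXEC sp_addlinkedserver @server = 'REMOTESRV', @srvproduct = '',
     @provider = 'SQLNCLI', @datasrc = 'REMOTESRV';        -- add the server

EXEC sp_serveroption 'REMOTESRV', 'data access', 'true';   -- allow DML
EXEC sp_serveroption 'REMOTESRV', 'rpc out', 'true';       -- allow stored procs

EXEC sp_addlinkedsrvlogin @rmtsrvname = 'REMOTESRV',       -- security mapping
     @useself = 'false', @rmtuser = 'remoteUser', @rmtpassword = 'P@ssw0rd!';

-- Four-part name: linkedserver.database.schema.table
SELECT * FROM REMOTESRV.AdventureWorks.dbo.Sales;
```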

Distributed Transactions

  • Changes to remote and local system done as a transaction
  • Select/insert/update/delete on remote servers
  • BEGIN DISTRIBUTED TRANSACTION
  • Uses windows MSDTC
    • The transaction manager coordinates the transaction across the resource managers
    • Resource managers are the servers involved
    • 2 phase commit
      • Prepare – locks and verify that it will work
      • Commit – DTC instructs to commit otherwise rollback
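A distributed transaction spanning the local server and a linked server might look like this (table and server names are hypothetical):

```sql
BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.Accounts
SET Balance = Balance - 100 WHERE AccountId = 1;   -- local resource manager

UPDATE REMOTESRV.Finance.dbo.Accounts
SET Balance = Balance + 100 WHERE AccountId = 9;   -- remote resource manager

COMMIT TRANSACTION;  -- MSDTC drives the two-phase commit across both servers
```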

SSIS

  • Extract, transform and load
  • An SSIS package is an xml file with a .dtsx extension
  • Use BIDS to create
  • Package configuration  
    • XML
    • SQL Table
    • Variable
    • Registry
  • Create SSIS using
    • Import/export wizard
    • Copy database wizard
    • BIDS
    • *maintenance plan
  • SSIS Service
    • One per server even if multiple instances
    • Manages storage, security on package
    • SSMS connects to Integration Services using Windows authentication only
  • Storage
    • Msdb (sysssispackages)
    • File system (…\100\dts\packages\)
  • Deploying SSIS
    • Deployment wizard – creates manifest file
    • Object explorer – import/export wizard
    • Dtutil – cmd utility
  • MSDB roles
    • Db_ssisadmin
    • Db_ssisltduser
    • Db_ssisoperator
  • Execute SSIS
    • DTExec
    • DTExecUI
    • Object Explorer
    • SQL Agent
    • Programmatically
  • SSIS logging
    • Windows Event log
    • Text files
    • Xml files
    • SQL Tables
    • SQL profiler trace

Copy Database Wizard

  • Only works between SQL Servers
  • Copies entire database
  • Can be used to transfer from SQL Server 2000 to 2008
  • Method 1 – Detach/Attach
    • Efficient
    • Source must go offline
    • SQL agent on destination must be able to see mdf/ldf etc.
  • Method 2 – SQL Management Object (SMO)
    • Slower
    • Creates objects then populates
    • Agent service does not require access like method 1
  • Copy wizard also prompts for the following:
    • Logins
    • User defined error msgs
    • SP’s in master
    • SQL Server agent definition
    • End points

 Import Export Wizard

  • SQL Server to any format
  • Any format to SQL Server

 

Thanks for reading.

 


SQL Fragmentation

Data Collation

  • Allows storing multiple languages in a single column
  • Specify languages for instance, db or column
  • Set of rules for sorting, storing and comparing text
  • Affects most DB operations
  • Affects query results, import/export, backup and restore
  • Model db collation will be used for new databases
  • Default is set to most appropriate based on win regional settings
  • Collation rules
    • Case sensitive (CS) – upper vs lower case in sorting order
    • Accent sensitive (AS) – ‘a’ = or != ‘ä’
    • Kanatype sensitive (KS) – hiragana = or != katakana
    • Width sensitive (WS) – same char stored as 1 byte vs 2 bytes, equal or not?
    • Default unless specified – CI AI KI WI
  • Windows Collation
    • Based on regional  settings
    • Unicode and non-unicode sorted the same way (sort varchar = sort nvarchar)
    • Choose win collation if possible
  • SQL Server Collation
    • Backward compatibility
    • SQL 6.5 and 7
    • Unicode != non-unicode sorting
    • Prefixed with sql_***
  • Binary Collation
    • Sort and compare based on binary value
    • Improve query performance
    • Simpler than other collations
    • Suffix of _BIN (backward compatibility) or _BIN2 (new)
  • SQL 2008 has 80 new collations
  • SELECT * FROM sys.fn_helpcollations() – list of collations
  • Temp tables use tempdb’s collation by default, not the user db – use COLLATE DATABASE_DEFAULT to avoid conflicts
  • Database level collation
    • Uses server as default if not specified
    • Default for columns
    • Used in any queries against db
    • T-SQL > CREATE  database
  • Column Level Collation
    • Uses db if not specified
    • Char, nchar, varchar, nvarchar, text, ntext
    • T-SQL > CREATE TABLE … COLLATE
  • Expression level collation
    • Overrides db collation
    • T-SQL > COLLATE clause
  • How to decide on  Collation
    • Upgrading from 6.5/7 – use SQL collation
    • Languages not used, use win suggested
    • Many languages, choose the most common
  • Collation – Backup and Restore
    • A DB always retains its source collation
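The three collation levels can be sketched as follows (database, table and collation choices are illustrative):

```sql
CREATE DATABASE SalesDb COLLATE Latin1_General_CI_AS;   -- database level

CREATE TABLE dbo.Customers
(
    Name nvarchar(100) COLLATE Latin1_General_CS_AS     -- column level
);

SELECT Name FROM dbo.Customers
ORDER BY Name COLLATE Latin1_General_BIN;               -- expression level

SELECT * FROM sys.fn_helpcollations();                  -- list available collations
```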

Data Compression (ROW | PAGE)

  • Save space
  • New to 2008 (enterprise and developer)
  • Decreases performance (writes)
  • Two types of compression Row or Page
  • Applied to tables and indexes, heaps and clustered indexes
  • Use compression on non-clustered (stored separately)
  • Can configure compression on individual partitions
  • CREATE TABLE/INDEX with DATA_COMPRESSION ROW | PAGE
  • ALTER TABLE REBUILD WITH DATA_COMPRESSION ROW | PAGE | NONE (can be time consuming)
  • Compression not for system tables
  • ROW Compression
    • Reduces space by converting fixed length to variable length
    • Allows you to fit more rows per page
    • No effect on tinyint, smalldatetime, date, uniqueidentifier, text, image, xml, table, sql_variant, cursor
    • No effect on fields that are already variable
  • PAGE Compression
    • Does the following
      • Row
      • Prefix (repeating values in column, replaced by ref#)
      • Dictionary (repeating values in page, replaced by ref#)
    • Page compression goes further than row
    • Page compression has a more profound effect on writes
    • Enable Page compression, enables row by default
  • Estimated space savings: sp_estimate_data_compression_savings
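Estimating and then applying page compression (table name is hypothetical):

```sql
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Sales',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Rebuild with compression; can be time consuming on large tables
ALTER TABLE dbo.Sales REBUILD WITH (DATA_COMPRESSION = PAGE);
```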

Sparse Columns

  • Reduce amount of space taken up by NULLS
  • Reads take longer on non-null values
  • Should only be used on columns with lots of NULLS
  • Column1 varchar (50) SPARSE NULL
  • A column set is a computed column that returns XML of all the SPARSE columns
  • Column sets are updateable
  • Sparse columns appropriate for filtered indexes
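A sparse-column table with a column set (table and column names are hypothetical):

```sql
CREATE TABLE dbo.Products
(
    ProductId int PRIMARY KEY,
    Colour    varchar(50)   SPARSE NULL,
    Weight    decimal(10,2) SPARSE NULL,
    AllProps  xml COLUMN_SET FOR ALL_SPARSE_COLUMNS  -- computed XML over all sparse cols
);
```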

Maintaining Indexes

  • Heap – table with no index
  • Clustered index – sorts the table, 1 per table, index within table
  • Non-clustered index – separate IDX – can have many.
  • If CREATE or DROP Clustered index, it rebuilds all non-clustered
  • If dropping lots of indexes, drop non-clustered first, then clustered
  • Cannot drop an index that is supporting a primary or unique constraint
  • Update row – index either in place update or delete and insert
  • Delete row – index corresponding row deleted. Either delete or mark for delete (ghosting) ghost records ignored. Used for fast rollback
  • SELECT INTO creates heaps – lower fill factor – more space on disk
  • CREATE NONCLUSTERED INDEX xxx ON SALES (salesid, orderid) INCLUDE (xxx, yyy) – note INCLUDE is only valid on nonclustered indexes
    • Salesid – foreign key
    • Salesid and orderid – combined primary key
    • Salesid – good for where, groupby part of query
    • Include – good for select fields in query
  • Index objects
    • sys.dm_db_index_usage_stats
    • sys.dm_db_missing_index_details
    • sys.dm_db_missing_index_columns
    • sys.dm_exec_query_stats
    • sys.dm_db_index_operational_stats
    • DBCC SHOWCONTIG (employees)
  • CREATE INDEX
    • Fillfactor – desired page fullness
    • PadIndex – if fill factor impacts non-leaf level
    • Sort in tempdb – sorting while creating
    • Ignore_dup_key – for unique constraints
    • Drop_existing – use free space of old one
    • Online – access while creating (dev, ent, eval only)
    • Maxdop – max CPUs used when creating the index
  • Internal vs External Fragmentation
    • External
      • Logical order of pages stretches across extents
      • SQL spends more time accessing drives
      • Split page, SQL must load 2 extents (16 pages)
    • Internal
      • Refers to empty space within pages (deleted data)
      • Improves write performance (insert without re-arranging pages)
      • Decreases read performance (load more extents)
  • Index Rebuild vs Re-organize
    • Rebuild
      • More than 30% fragged
      • Not accessible unless ‘ONLINE’ – alter index
      • Very intensive
      • Can specify fill factor
      • Rebuilding does update stats (no need to do separately)
    • Re-Organize
      • Less than 30%
      • Not as intensive
      • Users can access (use page locks)
      • Takes longer
      • Cannot specify fill factor
      • Can cancel – no rollback
      • Not as thorough
  • ALTER INDEX command
    • Can DISABLE index
    • Rebuild Index (REBUILD)
    • Re-Organize Index  (REORGANIZE)
    • Change Index Options
    • Cannot add or change columns – must use CREATE INDEX WITH DROP_EXISTING
  • Index FILL FACTOR guidelines
    • Read only database – fill factor 100%
    • Mostly Writes – fill factor 50%-70%
    • Both read and write – fill factor 80%-90%
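Putting the rebuild/reorganize guidance into T-SQL (index and table names are hypothetical):

```sql
-- Check fragmentation first
SELECT avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Sales'), NULL, NULL, 'LIMITED');

-- Over ~30% fragmented: rebuild (ONLINE requires Enterprise/Developer/Eval)
ALTER INDEX IX_Sales_SalesId ON dbo.Sales
REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);

-- Under ~30%: reorganize (no fill factor option, can be cancelled)
ALTER INDEX IX_Sales_SalesId ON dbo.Sales REORGANIZE;
```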

SQL Indexes

DBCC Explained

  • Most DBCC commands are actioned on an internal snapshot of the database being checked – deleted after use
  • Avoids contention on the main db
  • Processor intensive, don’t use on heavy load server
  • Consumes space in tempdb
  • DBCC CHECKTABLE
    • NOINDEX command skips nonclustered index (speed up process)
    • Or specify the index ID
    • Clustered indexes and heaps are always checked
    • REPAIR_FAST, REPAIR_ALLOW_DATA_LOSS, REPAIR_REBUILD
    • DB must be in single user mode for a repair to occur
    • ALL_ERRORMSGS | NO_INFOMSGS
    • TABLOCK – no snapshot, do on main table
    • ESTIMATEONLY – how much space it will use in tempdb
    • PHYSICAL_ONLY – physical structure of pages, rows etc
    • A DBCC CHECKDB runs the following commands on every object in the database:
      • Checkalloc
      • Checktable
      • Checkcatalog

TLOG Shrink vs TLOG Truncation 

  • TLOG TRUNCATION
    • Truncating a log file changes one or more VLFs’ status from active to inactive
    • Occurs at checkpoint (Simple recovery mode) or when you backup the LOG (Full/Bulk recovery mode)
    • If full/bulk recovery, checkpoints simply write dirty pages to disk.
    • You have to perform a t-log backup to force truncation in full/bulk – you cannot rely on checkpoints, which do nothing to truncate the log in those modes
    • Truncation – Does not change the physical size of the TLOG (just frees up VLF’s to be used).
    • There is an option in TLOG (Auto-Shrink) which would physically shrink the file periodically.
    • You can manually force log truncation by issuing backup log (NO_LOG or TRUNCATE_ONLY)
    • BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated. Note: when you use this command you can no longer do regular log backups or point-in-time recovery. It truncates the log files (VLFs) and breaks the log chain; to maintain the log chain again you have to run a full or differential backup asap.
  • TLOG SHRINKING
    • Shrinking the log file will reduce the physical log file size.
    • Shrinking the log file is not best practice unless you have no other option.
    • How to avoid the shrinking?
    • Frequently run the log backup and maintain the log size as small as possible.
    • Alter the database and point the log file to another drive/Add new drive.
    • Ready to accept the data loss? – Then put the database in simple recovery mode and run a differential backup every 2 to 3 hours.
    • You can manually shrink the log file by using: DBCC SHRINKFILE (Test_Log)
    • Important note: Shrinking the data/log file will increase the fragmentation. Try to truncate the log file instead of shrinking. This means running the log backup more frequently.
    • Sometimes the shrink command will not shrink the file – when the active portion of the log sits at the end of the file, or a transaction is still open (not yet committed).
    • DBCC SHRINKFILE options
      • EMPTYFILE – moves all data to another file in filegroup. Then you could delete file (alter database). SQL will not write to that file again.
      • TARGET_SIZE – Tries to shrink down to that size
      • NOTRUNCATE – Although data moved, not released to OS
      • TRUNCATEONLY – release space to the OS from the end of the file, no page rearrangement
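A shrink sketch, following the advice above to truncate (back up the log) first (file and database names are hypothetical):

```sql
BACKUP LOG AdventureWorks TO DISK = 'C:\backups\aw_log.trn';  -- frees VLFs first

DBCC SHRINKFILE (AdventureWorks_Log, 512);  -- target size in MB; last resort only
```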

Database Recovery Modes

  • Recovery model determines how SQL Server uses the logs for a database
  • 3 recovery models
    • Simple – log space re-used after each checkpoint
    • Full – all logs kept, all transactions written to log, all data recoverable
    • Bulk Logged – similar to full, but bulk operations are minimally logged
  • System databases are in simple mode, except msdb which can be full

Database Backups

  • Can be done online, however resource intensive
  • Full backup – backup all data + small portion of log while backup was running
  • Diff Backup – backup extents modified since last full backup
  • Partial Backup – backup filegroup and partition extents that were modified since the last full or partial backup
  • File/Filegroup backup – full backup of file or filegroup
  • Copy-only – same as full however does not disrupt LSN / transactional log chain
  • Transaction log – backup log entries since the last log backup. Also truncates the ldf file.
  • Tail log backup – backup t-log just before a restore operation
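The backup types above expressed in T-SQL (database and path names are hypothetical):

```sql
BACKUP DATABASE AdventureWorks TO DISK = 'C:\backups\aw_full.bak';   -- full

BACKUP DATABASE AdventureWorks TO DISK = 'C:\backups\aw_diff.bak'
WITH DIFFERENTIAL;                                                   -- diff

BACKUP DATABASE AdventureWorks TO DISK = 'C:\backups\aw_copy.bak'
WITH COPY_ONLY;                                                      -- no LSN chain impact

BACKUP LOG AdventureWorks TO DISK = 'C:\backups\aw_log.trn';         -- t-log

BACKUP LOG AdventureWorks TO DISK = 'C:\backups\aw_tail.trn'
WITH NORECOVERY;                                                     -- tail-log before restore
```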

Compressed Backups

  • Enterprise Only, SQL 2008 only
  • Any edition can restore
  • Performance advantage
  • More processor intensive
  • Cannot be read by 2005, 2000, 7 etc

Restoring Databases

  • During full restore db is usually not available
  • Some cases we can restore only part, leave rest of db online
  • Online restore is default when performing file, filegroup, piecemeal restores
  • Restoring individual pages – check suspect_pages table in msdb
    • Suspect page in index – rebuild index
    • Suspect page in table – restore from backup
  • System databases DR – if you can start SQL, restore the system databases. If you cannot start SQL, you need to rebuild the system databases

SQL Server Agent

  • 3 Key concepts – jobs, alerts and operators
  • Info stored in msdb
  • There is a separate agent service for each instance
  • Dbmail uses smtp, not reliant on sql service
  • Maintenance plans – must be part of sysadmin to view
  • Maint plans – msdb..sysmaintplan_log and sysmaintplan_logdetail

Policy Based Management

  • Introduced 2008
  • View/configure settings multiple servers
  • Apply changes to many
  • Enforce standards – aka – explicit admin
  • Access under management > Policy Management
  • 3 concepts
    • Targets
      • Objects to be managed (db, instance, table, index etc)
      • Can have target sets (e.g. all db’s hr_*)
    • Facets
      • Properties of a target
      • server facet – collation, version etc
      • db facet – name, size collation
      • index facet – etc etc
    • Conditions
      • States allowed for target e.g. name like table_xxx
  • 4 Evaluation Modes
    • On demand
    • On change: prevent (ddl trigger)
    • On change: log only (ddl trigger)
    • On schedule: agent
    • Above are used to check if target complies with policy
    • Writes to event log
    • Sample policies in xml format available which can be imported [tools\policies\dbengine\1033]

Data Collector

  • Collect and consolidate performance data
  • Made up of agent jobs, data structures, RS and tools
  • RS and SQL Agent is required
  • All editions except express
  • DC components
    • Management Data Warehouse stores data
    • Data providers – perfmon, trace, queries
    • Collection items – what to collect, how often. Collection sets (grouping)
    • Data Collection Reports (RS)
    • Proxy accounts – used to access data sources

Thanks for reading.


RAID Levels

Concepts

  • Clustering, log shipping, replication, mirroring
  • Hot-add CPU & memory support
  • Fault tolerant disk sub-systems (RAID)

RAID (Redundant Array of Independent Disks)

  • All RAID levels require at least 2 disks
  • RAID 0 – STRIPED
    • Very fast, very good performance
    • No redundancy
    • Writes to many disks
    • If any disk fails, RAID array is lost
  • RAID 1 – MIRROR
    • No performance benefit
    • Provides redundancy
    • Data written to first disk then mirrored to other
    • Expensive solution – two hard drives but only one drive’s usable capacity
  • RAID 5
    • Most common
    • Best of RAID 0 and 1
    • Requires at least 3 disks
    • Stripes data and parity across all disks – one disk’s worth of capacity is used for parity
    • Writes are slower (additional parity calculation); reads are ok
  • RAID 10 (1+0)
    • Highest performing
    • High redundancy
    • Most expensive
    • At least 4 drives
    • Basically mirrored pairs (RAID 1) that are then striped (RAID 0)

SQL High Availability Options

  • Log shipping (licence all servers)
  • Replication (licence all servers)
  • Mirroring (licence all servers)
  • Clustering (licence active node only)
  • AlwaysOn (new to 2012 – if using the passive node, then license)

SQL Recovery models and HA Options

  • Log shipping – Full
  • Mirroring – Full
  • Replication – Simple or Full (depends on recovery)
  • Clustering – Simple or Full

Log Shipping

  • Since 2000
  • Uses t-logs, send to nodes, which restore the log at intervals
  • Included in all editions that you pay for, big change from 2000 – only avail to Ent edition
  • Must use Full/Bulk-log recovery model
  • Shared drive location required for logs to be backed up to and read from
  • Can configure compression – usually not required unless WAN
  • Backup node has option of ‘no recovery’ or ‘standby’ mode
    • Standby – allow user access. Configure restore job to kill users
    • No Recovery – does not allow any access
  • Can ship to multiple nodes
  • No quick and easy way to failover
  • Bring backup server online [ RESTORE LOG xxx WITH RECOVERY]

Mirroring

  • Introduced in 2005 (SP1 Production ready)
  • DB by DB HA solution
  • Only user databases can be mirrored
  • Login, SQL jobs, SSIS not mirrored
  • Port 5022 – default port. If mirrored instances are on same server; need different ports
  • CREATE ENDPOINT xxx STATE = STARTED AS TCP (LISTENER_PORT = 5022) FOR DATABASE_MIRRORING (ROLE = PARTNER)
  • Principal databases show word ‘principal, synchronized’
  • Mirrored databases show word ‘restoring’
  • You can failover using SSMS or [ALTER DATABASE xxx SET PARTNER FAILOVER]
  • If you setup mirroring using t-sql (full safety is set automatically)
  • Synchronous Mode
    • Manual or Automatic failover (witness server)
    • High Safety mode
    • Writes are hardened on both servers before ACK sent back to Client
    • Automatic failover (can configure failover partner=backupserver in connection string)
  • Asynchronous Mode
    • Performance mode
    • Command first written to server
    • ACK sent back to client
    • Command sent to mirror server
    • Mirror sends ACK back to Primary

Clustering

  • Uses MSCS – Microsoft Cluster Service
  • Failover entire server from one node to another
  • Single set of hard disks
  • SQL Service only runs on one node at a time. On the passive node it’s installed but stopped
  • On failover, services are started on the new active node
  • Requires a clustered quorum drive for cluster settings
  • Concept of resources (service, drive, netshare, ip, hostname)
  • Failovers entire resource groups
  • Server nodes all poll each other; the quorum decides. If the quorum is down the others vote among themselves
  • Use the SQL Server Cluster Configuration tool
  • Windows cluster service does not have to run under a domain account – uses local system
  • The SQL services should use a domain account (except browser)
  • Public network (clients connect to instance)
  • Private network (communication between nodes)
  • Can use Filestream – KB937444
  • Install on Primary node and then add node wizard for all other nodes
  • Patches, apply to primary test then apply to others
  • SQL 2000 – enterprise edition only
  • SQL 2005 – Standard and Enterprise
  • SQL 2008  – Standard and Enterprise
  • SSIS – further configuration [MsDTSSrvr.ini.xml] in DTS\BIN folder
  • Install, configure and cluster MSDTC for distributed transactions
  • See my previous clustering related post for further details and examples

Replication

  • See chapter 9 – lots of detail – to be completed.

Expandability

  • Hot Add CPU
  • Hot Add Memory

Thanks for reading.


TDE

Transparent Data encryption (TDE)

  • Introduced in 2008
  • Encrypts data at rest, no application changes and min performance hit
  • Decrypted in memory
  • If TDE is enabled, backups are encrypted
  • TempDB is encrypted
  • Cannot encrypt other system db’s
  • Enterprise/Developer edition only

Enabling TDE

  • Create service master key on instance (SMK)
  • Create DB master key + Cert in master db (DMK)
  • Create DB encryption key in user db (DEK)
  • Then enable TDE in the DB.
  • TDE is based on the Data Protection API (DPAPI), introduced in Windows 2000, used since SQL 2005
  • TDE can only be enabled using T-SQL, no options in SSMS

TDE hierarchy

  • Create service master key on instance (SMK)
    • Created by SQL Server at setup
    • Encrypted using OS DPAPI and local key based on service account credentials
    • BACKUP SERVICE MASTER KEY TO FILE = …
    • Used to encrypt/decrypt all keys on server
    • Create DB master key + Cert in master db (DMK)
      • DMK is stored in master db
      • DMK is encrypted using Triple-DES algorithm
      • CREATE MASTER KEY ENCRYPTION BY…
      • CREATE CERTIFICATE <> WITH
      • BACKUP CERTIFICATE
      • Create DB encryption key in user db (DEK)
        • Stored in boot record of encrypted db
        • Created using DMK cert
        • CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE <>
        • TDE encrypts data and log using DEK
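The hierarchy above maps to this T-SQL sequence (certificate, password and database names are hypothetical; the SMK already exists from setup):

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngDMKP@ss!';    -- DMK in master

CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
BACKUP CERTIFICATE TdeCert TO FILE = 'C:\keys\TdeCert.cer'
    WITH PRIVATE KEY (FILE = 'C:\keys\TdeCert.pvk',
                      ENCRYPTION BY PASSWORD = 'An0therP@ss!'); -- needed to restore elsewhere

USE AdventureWorks;
CREATE DATABASE ENCRYPTION KEY                                  -- DEK in the user db
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE AdventureWorks SET ENCRYPTION ON;                -- enable TDE
```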

Encryption Algorithms

  • 3 Key Triple DES
  • AES (128, 192 and 256bit)
  • T-SQL – WITH ALGORITHM

How?

  • Encrypted at page level
  • Performed in memory before/after writing/reading to disk
  • Encrypted/decrypted using DEK
  • is_encrypted column set to 1 in sys.databases
  • Uses shared locks while en/de-crypting

Encryption Keys

  • Symmetric
    • Uses the same key to encrypt/decrypt
    • Faster
  • Asymmetric
    • Uses a public key to encrypt and a private key to decrypt
    • More secure
    • Uses certs (container for an asymmetric key)

Certificates

  • Encrypt data between SQL and apps over networks
  • Support db mirroring
  • Support Service Broker conversations
  • Sign SSIS packages
  • Certs are securable – can assign permissions
  • SQL Server can use internal generated certs or externally generated ones.

Key management

  • Create, backup, restore, change keys
  • Default – stores all with SQL
  • Can use 3rd party, extensible key management (EKM) & hardware security module (HSM)
  • Plug-in handles en/decryption and key management
  • HSM – dedicated hardware handles en/decryption
  • NOTE: in SQL 2008, EKM does not support certs

Cell/Column encryption

  • Introduced SQL 2005
  • Performance and space costs
  • Must be varbinary
  • EncryptByAsymKey
  • DecryptByAsymKey
  • Encrypted column cannot be used as Index or in WHERE clause

OS Encryption

  • W2K introduced encrypting file system (EFS)
  • Protects data at rest
  • Generally used for single computer/user
  • EFS will result in performance hit in SQL.
  • TDE cannot encrypt system db’s, but EFS could be used.

Backups

  • Once TDE is enabled, backups are encrypted
  • Cannot restore on another server
  • Need to create server cert on new server from backup of source server
  • Once key is restored, you can then restore the DB

Thanks for reading.

This is a follow on to my initial post which was a high-level overview of the SQL Server 2012 Always On feature.

Since then a project has come up which requires SQL Server Always On feature and SQL Server Master Data Services. In preparation for this task I started compiling high level steps on what would be involved when installing/configuring AlwaysOn.

Hopefully you’ll find it useful too.

Windows Failover Cluster

  • Install/configure Windows Failover Cluster feature on all nodes that will participate in AlwaysOn using Failover Cluster Manager
  • Add the nodes that will participate in the cluster
  • Provide Cluster name (windows name) and VIP – this creates DNS and AD objects.

Enable AlwaysOn in SQL Server Configuration Manager

  • SQL Server Configuration Manager – SQL Server properties – Notice the extra tab for AlwaysOn. The value here displays the Windows Cluster name. If the Windows cluster is not configured, it will mention this in the field. Check the AlwaysOn text box.

AlwaysOn

  • Repeat this on each node individually, or use the SQL PowerShell cmdlet:
[Enable-SqlAlwaysOn -ServerInstance xxx -Force]

Configure AlwaysOn

  • SSMS, AlwaysOn section, use the AlwaysOn wizard.
    • Create AlwaysOn Name
    • Select the databases to be part of AlwaysOn
    • Select Replicas, Automatic Failover, Synchronous commit, Read only on Secondary
    • Add other nodes, with same settings as above.
    • There are other tabs here, Endpoints, backup preferences and listener.
    • Select initial data synchronization – http://msdn.microsoft.com/en-us/library/hh231021.aspx
    • Allows you to script out process or let wizard do the work.

Simulating Failover

  • SSMS, Right click the AlwaysOn Availability group and select failover
  • Select instance to use as failover node.
  • Connect to that instance
  • A script can be generated at this point for this process or let the wizard action:
ALTER AVAILABILITY GROUP [xx] FAILOVER
  • Applications will fail / Clients disconnect – if specifying the instance name.
  • Challenge is to make applications failover seamlessly.
  • This is where we use the listener tab mentioned above.
  • Add the listener name.
  • This creates the AlwaysOn listener name which is the virtual name for the AlwaysOn group.
  • If the application or client is specifying this listener name then the failover will be seamless to the client, redirecting to the correct instance that’s hosting the primary replica.

Backups

 

Thanks for reading.


DB ownership chaining

Principals

  • Principals are entities that can request SQL Server resources
  • Every principal is assigned a SID
  • Windows-level principals have a Windows permission scope
    • Domain login, Local login, Group login
  • SQL Server-level principals have server-level permissions
    • SQL login
    • SQL login mapped to Win login/Cert or Asym key
  • Database-level principals have db permissions
    • DB User
    • DB User mapped to SQL Login, Win Login, Cert, Asym Key, DB role, App role or Public Role

Hierarchy of Securable Objects

SQL securables

Roles

  • Server roles – fixed
  • Database roles – fixed or user (flexible)
  • Logins/users added to these roles
  • Roles have specific permissions and purpose
  • Server roles (sysadmin, serveradmin, securityadmin, processadmin, setupadmin, bulkadmin, diskadmin, dbcreator)
  • Database roles (db_owner, db_security_admin, db_accessadmin, db_backupoperator, db_ddladmin, db_datawriter, db_datareader, db_denydatawriter, db_denydatareader)
  • There are a number of stored procedures and tables you can use to work with roles.
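For example, membership of these roles can be managed with the classic system procedures (the login and user names below are made up):

```sql
-- Hypothetical login/user names for illustration
EXEC sp_addsrvrolemember @loginame = N'DOMAIN\SqlOps', @rolename = N'sysadmin';    -- server role
EXEC sp_addrolemember   @rolename = N'db_datareader', @membername = N'ReportUser'; -- database role
```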

Users

  • Database principals
  • Mapped to login, cert or asym key
  • After creating a login, you create a database user
  • CREATE USER user_name FOR|FROM LOGIN|CERTIFICATE|ASYMMETRIC KEY
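Putting the login and the user together – a minimal sketch with placeholder names and password:

```sql
-- Create a server-level login, then map a database user to it
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd';
USE SalesDB;
CREATE USER AppUser FOR LOGIN AppLogin;
```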

Schema

  • Database schemas are collections of objects (tables, views, procedures)
  • Powerful way to manage permissions
  • Users are limited to the schemas to which they have been granted access
  • A schema must be transferred to another user/role before the schema owner can be deleted

Users vs Logins

  • Authentication – let them in
  • Authorization – what can they access
  • Logins provide access to the SQL instance. It’s the database user (mapped to the login) that provides access to objects contained within databases.

Authentication Mode

  • Mixed Mode Authentication vs Windows Authentication mode.
  • You cannot disable Windows authentication
  • Under Windows authentication, the SA account is created but disabled (with a system-generated password)
  • To change to mixed mode – enable SA (ALTER LOGIN sa ENABLE; ALTER LOGIN sa WITH PASSWORD), set a password and restart SQL Server
  • Windows authentication is recommended – better security.
  • SQL Server logins support: user must change password, expiration and enforce policy
  • SQL authentication is good for: cross-domain access, 3rd-party apps, mixed OS environments, distributed apps etc.
  • SQL authentication is not good for: maintenance overhead, no Kerberos, no additional Windows policy features
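The SA re-enable step above might look like this (the password is a placeholder; the authentication-mode change itself is done via SSMS or the registry, followed by a service restart):

```sql
-- Re-enable the sa login and give it a strong password
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'Str0ng!Passw0rd';
```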

Permissions

  • Consider the hierarchy of securables and at what level you want to apply the permission
  • Can apply at DB -> Schema -> Object level
  • Permissions (select, view change tracking, update, references, insert, delete, execute, receive, view definition, alter, take ownership and control)
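Applying SELECT at each level of the hierarchy – database, schema, then object (names are placeholders):

```sql
USE SalesDB;
GRANT SELECT TO AppUser;                          -- database level (all objects)
GRANT SELECT ON SCHEMA::Sales TO AppUser;         -- schema level
GRANT SELECT ON OBJECT::Sales.Orders TO AppUser;  -- object level
```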

Cross-Database Ownership Chaining

  • Enable at the instance level
  • EXEC sp_configure 'cross db ownership chaining', 1 (or 0)
  • Can also be set per database via a CREATE or ALTER DATABASE statement (SET DB_CHAINING)
  • If you grant permissions on a view that is based on two tables, the view’s permissions are checked, not the permissions on the individual tables. Slight performance increase.
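Enabling chaining at both scopes (the database name is a placeholder):

```sql
-- Instance-wide setting
EXEC sp_configure 'cross db ownership chaining', 1;
RECONFIGURE;

-- Per-database setting
ALTER DATABASE SalesDB SET DB_CHAINING ON;
```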

Object Permissions

  • You can specify an execution context
  • Run code as a specific user, not the user who is calling it
  • Only need to grant permissions on the object called.
  • Functions, procedures, triggers – EXECUTE AS …
    • CALLER (the only option before SQL 2005)
    • SELF
    • OWNER
    • ‘user_name’
    • ‘login_name’
  • EXECUTE AS replaces SETUSER
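A procedure that runs under its owner’s context rather than the caller’s (the procedure and table names are placeholders):

```sql
CREATE PROCEDURE dbo.GetOrders
WITH EXECUTE AS OWNER
AS
BEGIN
    -- The caller only needs EXECUTE on the procedure,
    -- not SELECT on the underlying table
    SELECT OrderID, OrderDate FROM dbo.Orders;
END;
```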

Auditing

  • SQL Server Enterprise edition only
  • Stored in a file, the Application event log or the Security event log
  • CREATE SERVER AUDIT SPECIFICATION
  • CREATE DATABASE AUDIT SPECIFICATION
  • .sqlaudit extension
  • DB audit actions (create, alter, drop, insert, update, delete)
  • Process: (1) create the server audit (2) ALTER SERVER AUDIT to enable it (3) then create the DB audit specification
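The three-step process above in T-SQL (the file path and object names are placeholders):

```sql
-- (1) Create the server audit (the target)
CREATE SERVER AUDIT MyAudit
TO FILE (FILEPATH = 'D:\Audits\');

-- (2) Enable it
ALTER SERVER AUDIT MyAudit WITH (STATE = ON);

-- (3) Create a database audit specification bound to it
USE SalesDB;
CREATE DATABASE AUDIT SPECIFICATION MyDbAuditSpec
FOR SERVER AUDIT MyAudit
ADD (SELECT, INSERT, UPDATE, DELETE ON dbo.Orders BY public)
WITH (STATE = ON);
```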

Change Data Capture (CDC)

  • New in SQL 2008 (Developer, Enterprise and Evaluation editions only)
  • Captures inserts, updates and deletes
  • Captures column information and metadata
  • Stored in change tables, mirroring the structure of the tracked table
  • sys.sp_cdc_enable_db
  • sys.sp_cdc_enable_table
  • sys.databases and sys.tables have fields (is_cdc_enabled, is_tracked_by_cdc)
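Enabling CDC on a database and then on a table, and checking the flags (names are placeholders):

```sql
USE SalesDB;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;  -- no gating role

-- Check the flags
SELECT name, is_cdc_enabled   FROM sys.databases WHERE name = 'SalesDB';
SELECT name, is_tracked_by_cdc FROM sys.tables   WHERE name = 'Orders';
```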

DDL Triggers

  • Keep track of structural changes
  • Fire after the events and are transactional, so they can be rolled back
  • Can prevent column changes or dropped tables
  • Can be set at database or server level
  • Server: sys.server_triggers ; database: sys.triggers
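A database-level DDL trigger that blocks DROP TABLE by rolling back the transaction it fires in (the trigger name is a placeholder):

```sql
CREATE TRIGGER trg_BlockDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    RAISERROR('Dropping tables is not allowed in this database.', 16, 1);
    ROLLBACK TRANSACTION;
END;
```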

SQL Server Configuration Manager

  • The SQL 2005 Surface Area Configuration tool is replaced by SQL Server Configuration Manager
  • Also available as a facet on the server instance
  • Secure by default
  • sp_configure

Security and SQL Agent

  • SQL Agent proxies; a proxy is based on a credential

Service Accounts and Permissions

  • Few changes in 2008; groups are still created at install, but have no rights in SQL Server
  • Only accounts selected during install have rights granted.
  • Use a domain account for the SQL Server service. It should not be a local admin.
  • Should not use the Network Service account as it’s shareable.
  • Cannot use the Local Service account for the SQL Server or SQL Agent services

Thanks for reading.

BCP – password prompt?

December 24, 2012

BCP is a very useful tool.

How and why we use it is explained in the following link and in BOL: http://msdn.microsoft.com/en-us/library/ms162802.aspx

I wanted to write about a small issue that you may come across which can drive you up the wall. Below is a simple example where I’ve launched the command prompt and run the following command, which exports the contents of the table TestTable to a csv file, test.csv:

bcp.exe TEST..TestTable out "d:\temp\test.csv" -c -t, -T -Slocalhost

BCP

Notice the password: prompt? Even though I’ve specified -T, which tells the bcp utility to run using trusted authentication.

“-T specifies that the bcp utility connects to SQL Server with a trusted connection using integrated security. The security credentials of the network user, login_id, and password are not required. If -T is not specified, you need to specify -U and -P to successfully log in.” – Microsoft

What password? I’m using the service account, which is a local Windows administrator and a system administrator in SQL Server, and I’m exporting to a local folder – so what f@ck3n! password! It turns out that even though it’s prompting for a password, the real problem is actually the path that you are executing bcp from – in my case: c:\documents and settings\serviceaccount\

Press Ctrl + C to abort the above BCP command:

BCP CTRL C

Type cd\ {enter}

and then change directory to the physical location of the bcp executable

cd program files\microsoft sql server\90\tools\binn

and now run the exact same command

bcp.exe TEST..TestTable out "d:\temp\test.csv" -c -t, -T -Slocalhost

BCP Success

Success!

It’s the kind of thing that can catch you out and waste more time than it really needs to. BCP is also a utility that you may not use very often, so unexpected stumbling blocks like this can really stump you for a bit.

If you’re using the command prompt you may figure out the problem fairly quickly. However, in my case I was writing a stored procedure, using the code below, and when running it the query never completed and never failed. It seemed to run forever. sp_who2 showed no signs of blocking.

All along it was waiting for a password that would never be entered.

BCP T-SQL
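The original screenshot of the stored procedure code isn’t reproduced here, but the pattern was along these lines – calling bcp from T-SQL via xp_cmdshell. Treat this as an illustrative reconstruction mirroring the command above, not the exact original code:

```sql
-- bcp launched from T-SQL; with the hidden password prompt,
-- this call appears to hang forever
DECLARE @cmd varchar(500);
SET @cmd = 'bcp.exe TEST..TestTable out "d:\temp\test.csv" -c -t, -T -Slocalhost';
EXEC master..xp_cmdshell @cmd;
```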

Merry Christmas and thanks for reading!