Chapter 1: Lesson 1 -- Determining Hardware and Software Requirements
xx
(D) Windows Server 2008 Enterprise"
"Correct Answers: C and D
(D) Windows XP Tablet Edition SP2"
"Correct Answer: B
Exam Tip
SQL Server 2008 is not supported on Windows Server 2008 Server Core because the .NET Framework is not supported on Server Core. SQL Server 2008 relies on .NET Framework capabilities to support FILESTREAM, SPATIAL, and DATE data types, along with several additional features.
What edition of Windows Server 2008 is not supported for SQL Server 2008 installations?
Windows Server 2008 Server Core is not supported for SQL Server 2008 installations.
Which operating systems are supported for all editions of SQL Server?
Windows Server 2003 Standard SP2 or higher, and Windows Server 2008 Standard RC0 or higher.
Chapter 1: Lesson 2 -- Selecting SQL Server Editions
Understand the differences between SQL Server 2008 Enterprise, Workgroup, Standard, and Express; Understand the role of each service that ships within the SQL Server 2008 data platform.
Quick Check: Which editions of SQL Server are designed as storage engines for embedded applications with limited hardware and feature support?
Express and Compact editions are designed as storage engines for embedded applications and support only a single CPU, up to 1 GB of RAM, and a maximum database size of 4 GB.
Quick Check: Which editions support the entire feature set available within the SQL Server data platform? Of these editions, which editions are not licensed for production use?
Enterprise, Developer, and Evaluation editions have the entire set of features available within the SQL Server 2008 data platform. Developer and Evaluation editions are not licensed for use in a production environment.
(D) Compact"
"Correct Answers: B and C
(D) SQL Server 2008 Enterprise"
"Correct Answer: D
Exam Tip
For the exam, you need to understand the basic design goals for each edition of SQL Server. You also need to know the feature set, memory, and processor support differences between editions.
What are the main services in SQL Server 2008?
Service Broker, SSIS (SQL Server Integration Services), SSRS (SQL Server Reporting Services), and SSAS (SQL Server Analysis Services)
What editions of SQL Server are available?
Enterprise, Standard, Workgroup, Express, Compact, Developer, Evaluation
Chapter 1: Lesson 3: Installing and Configuring SQL Server Instances
xx
Quick Check: What are the authentication modes that SQL Server can be configured with?
You can configure SQL Server to operate under either Windows Only or Windows and SQL Server authentication modes.
Quick Check: Which edition of SQL Server supports installing more than one instance of SQL Server on a machine?
Only SQL Server Enterprise supports multiple instances on the same machine.
(D) sp_filestream_configure"
"Correct Answer: D
(D) SQL Server Surface Area Configuration Manager"
"Correct Answer: C
Exam Tip
You will need to know how collation sequences affect the way SQL Server stores and handles character data.
NOTE: Windows Service Control Applet
Applications can broadcast a special command, called an enumeration request, across a network to locate any servers running SQL Server that are on the network. Although being able to enumerate servers running SQL Server is valuable in development and testing environments where instances can appear, disappear, and be rebuilt on a relatively frequent basis, enumeration is not desirable in a production environment. By disabling enumeration responses (setting the Hide Instance option to Yes), you prevent someone from using discovery techniques to locate servers running SQL Server for a possible attack.
How many instances does SQL Server support?
50 on a single machine
What does the collation sequence do?
The Collation Sequence controls how SQL Server stores and manages character-based data.
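Example: Specifying a Collation
A minimal sketch of overriding the collation for a single column; the table and collation name are illustrative only:
CREATE TABLE dbo.Customer
(CustomerID   INT         NOT NULL,
 CustomerName VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL);  -- column-level collation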
Which versions of SQL Server support multiple instances?
Enterprise, Developer.
CAUTION: Windows Service Control Applet
The Windows Service Control applet also has entries for SQL Server services and allows you to change service accounts and passwords. You should never change service accounts or service account passwords using the Windows Service Control applet. SQL Server Configuration Manager needs to be used, because it includes the code to regenerate the service master key that is critical to the operation of SQL Server services.
Chapter 1: Lesson 4: Configuring Database Mail
Database Mail provides a notification capability to SQL Server instances. In this lesson, you learn about the features of Database Mail and how to configure Database Mail within a SQL Server 2008 instance.
Quick Check: What are the two basic components of Database Mail?
Database Mail uses mail profiles, which can contain one or more mail accounts.
Quick Check: What are the two types of mail profiles that can be created?
Mail profiles can be configured as either public or private.
(D) Activity Monitor"
"1. Correct Answers: B and C
(D) Designate the mail profile as the default"
"2. Correct Answers: C and D
Lesson Summary:
Database Mail is used to send mail messages from a SQL Server instance. To send mail messages, SQL Server Agent must be running. A mail profile can contain one or more mail accounts. You can create either public or private mail profiles. The mail profile designated as the default profile is used to send mail messages if a profile is not specified.
TIP: Sending Mail
Database Mail utilizes the services of SQL Server Agent to send messages as a background process. If SQL Server Agent is not running, messages accumulate in a queue within the msdb database.
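Example: Sending a Message with Database Mail
A minimal sketch of sending a message through an existing mail profile; the profile name and recipient address are hypothetical:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'OperationsProfile',        -- hypothetical mail profile
    @recipients   = 'dba@example.com',
    @subject      = 'Backup completed',
    @body         = 'The nightly backup finished successfully.';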
Collation Sequence
A combination of the character set that is supported and the settings for how to handle uppercase, lowercase, and accent marks.
Data Mining
An analytics engine that is part of SSAS which uses applied statistics to make predictions about data.
Database Mail
Allows mail messages to be sent from a SQL Server Instance
Mail Profile
A collection of one or more mail accounts that is used to send messages using Database Mail
Case Scenario -- Page 33
XX
Chapter 2: Lesson 1: Configuring Files and Filegroups
Data within a database is stored on disk in one or more data files. Prior to being written to the data files, every transaction is written to a transaction log file. In this lesson, you learn how to design the data files underneath a database, group the files into filegroups to link physical storage into a database, and manage the transaction log. You also learn how to configure the tempdb database for optimal performance.
Quick Check: What are the types of files that you create for databases and what are the commonly used file extensions?
You can create data and log files for a database. Data files commonly have either an .mdf or .ndf extension, whereas log files have an .ldf extension.
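Example: Creating a Database with Data and Log Files
A minimal sketch of defining data files, a second filegroup, and a log file; the database name, file paths, and sizes are illustrative only:
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Data,  FILENAME = 'C:\SQLData\Sales_Data.mdf',  SIZE = 100MB),
FILEGROUP FG1
    (NAME = Sales_Data2, FILENAME = 'C:\SQLData\Sales_Data2.ndf', SIZE = 100MB)
LOG ON
    (NAME = Sales_Log,   FILENAME = 'C:\SQLData\Sales_Log.ldf',   SIZE = 50MB);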
Quick Check: What is the purpose of the transaction log?
The transaction log records every change that occurs within a database to persist all transactions to disk.
D. Grant select permission on the database to all users and revoke insert, update, and delete permissions from all users on the database."
"1. Correct Answer: C
Exam Tip
The maximum transaction throughput for any database is bound by the amount of data per second that SQL Server can write to the transaction log.
Lesson Summary:
"You can define one or more data and log files for the physical storage of a database.
Chapter 2: Lesson 2: Configuring Database Options
In this lesson, you learn how to configure database options, including restricting database access and setting the recovery model that controls how transactions are logged and which backup and restore options are available.
Quick Check: How do you restrict database access to members of the db_owner role and terminate all active transactions and connections at the same time?
You would execute the following command: ALTER DATABASE <database name> SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
Quick Check: What backups can be executed for a database in each of the recovery models?
You can create full, differential, and file/filegroup backups in the Simple recovery model. The Bulk-Logged recovery model allows you to execute all types of backups, but you cannot restore a database to a point in time during an interval when a minimally logged transaction is executing. All types of backups can be executed in the Full recovery model.
D. Configure the database in the Full recovery model"
"1. Correct Answer: D
Lesson Summary:
"You can set the recovery model for a database to Full, Bulk-Logged, or Simple.
Recovery Options
The recovery options determine the behavior of the transaction log and how damaged pages are handled.
What are the three recovery models?
Full, Bulk-Logged, and Simple.
What is the best practice for production database recovery models?
Every production database that accepts transactions should be set to the Full recovery model. By placing the database in the Full recovery model, you can maximize the restore options that are possible.
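Example: Setting the Recovery Model
A minimal sketch of switching a database to the Full recovery model; the database name is illustrative:
ALTER DATABASE Sales SET RECOVERY FULL;
-- Other options: SET RECOVERY BULK_LOGGED;  SET RECOVERY SIMPLE;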
What type of backups can you do in the Bulk Recovery Model?
Full, differential, and transaction log backups, although you cannot restore to a point in time within an interval containing minimally logged operations.
What type of backups can you do in the Full Recovery Model?
Full, Differential and Transaction Log Backups
What type of backups can you do in the Simple Recovery Model?
Full and differential backups; transaction log backups are not available.
Chapter 2: Lesson 3: Maintaining Database Integrity
xx
Quick Check: What checks does DBCC CHECKDB perform?
DBCC CHECKDB checks the logical and physical integrity of every table, index, and indexed view within the database, along with the contents of every indexed view, page allocations, Service Broker data, and database catalog.
Quick Check: Which option should be enabled for all production databases?
You should set the PAGE_VERIFY CHECKSUM option for all production databases.
D. DBCC FREEPROCCACHE"
"Correct Answers: A and C
Best Practices: Page Verification
You should enable the PAGE_VERIFY CHECKSUM option on every production database.
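Example: Page Verification and Integrity Checking
A minimal sketch of enabling page checksums and running an integrity check; the database name is illustrative:
ALTER DATABASE Sales SET PAGE_VERIFY CHECKSUM;
DBCC CHECKDB ('Sales') WITH NO_INFOMSGS;   -- checks logical and physical integrity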
Note: Database Mirroring
If the database is participating in Database Mirroring, SQL Server attempts to retrieve a copy of the page from the mirror. If the page can be retrieved from the mirror and has the correct page contents, the page is replaced automatically on the principal without requiring any intervention. When SQL Server replaces a corrupt page from the mirror, an entry is written into the sys.dm_db_mirroring_auto_page_repair view.
Corrupt Page
A page that has become inconsistent due to a failure of the disk subsystem while the page was being written.
Filegroup
A logical definition for one or more data files. A filegroup defines a storage boundary for objects.
Recovery Model
A database option that controls the types of backups that can be performed along with the restore options that are possible.
Chapter 2 - Case Scenario page 57
xx
Chapter 3: Lesson 1: Creating Tables:
xx
Quick Check: How do you design a database?
The ruling principle for designing a database is "Put things where they belong." If the need is to store multiple rows of information that link back to a single entity, you need a separate table for those rows. Otherwise, each table defines a major object for which you want to store data, and the columns within the table define the specific data that you want to store.
Quick Check: What are three new options that you can configure for columns, rows, or pages within a table?
You can designate columns as SPARSE to optimize the storage of NULLs. You can apply the FILESTREAM property to a VARBINARY(MAX) column to enable the storage of documents larger than 2 GB in a directory on the operating system. Rows can be compressed to fit more rows on a page. Pages can be compressed to reduce the amount of storage space required for the table, index, or indexed view.
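Example: SPARSE Columns and Page Compression
A minimal sketch of each option; table and column names are illustrative. The two tables are shown separately because sparse columns cannot be combined with row or page compression, and FILESTREAM is omitted because it requires additional instance and filegroup configuration:
CREATE TABLE dbo.CustomerNote
(NoteID INT          NOT NULL,
 Note   VARCHAR(500) SPARSE NULL);            -- NULLs consume no storage

CREATE TABLE dbo.OrderArchive
(OrderID   INT      NOT NULL,
 OrderDate DATETIME NOT NULL)
WITH (DATA_COMPRESSION = PAGE);               -- or ROW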
D. A VARBINARY(MAX) column with the FILESTREAM property"
"Which options are not compatible with row or page compression? (Choose two. Each forms a separate answer.)
Exam Tip:
A FILEGROUP designated for FILESTREAM storage is off-line and inaccessible within a Database Snapshot. In addition, you cannot implement Database Mirroring against a database containing data stored with FILESTREAM.
Lesson Summary:
"* Schemas allow you to group related objects together as well as provide a security container for objects.
Note: Creating Schemas
The CREATE SCHEMA statement supports the creation of a schema along with the creation of tables and views and the assignment of permissions in a single statement. Creating code within SQL Server is not an obfuscation exercise, nor is it an exercise in trying to figure out the fewest statements you can construct to achieve your goals. Someone else usually has to maintain your code, and spending a couple of extra steps to create a more maintainable script is advisable. Therefore, it is recommended that you do not create tables and views or assign permissions within a CREATE SCHEMA statement. Any CREATE SCHEMA statement that is executed must be in a separate batch.
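Example: Creating a Schema in Its Own Batch
A minimal sketch following the recommendation above; the schema and table names are illustrative:
CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
CREATE TABLE Sales.Customer (CustomerID INT NOT NULL);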
Note: Numeric and Decimal Data Types
The data types NUMERIC and DECIMAL are exactly equivalent. Both data types still exist within SQL Server for backwards compatibility purposes.
Note: Numeric Precision
FLOAT and REAL data types are classified as approximate numerics, or floating point numbers. The value stored within a float or real column depends upon the processor architecture that is used. Moving a database from a server with an Intel chipset to one with an AMD chipset, or vice versa, can produce different results in these columns. If you are utilizing FLOAT and REAL due to the range of values supported, you must account for compounding error factors in any calculation that you perform.
Note: Row Size Limitation
"If the maximum size of a row in your table exceeds 4,009 bytes, you cannot issue an ALTER statement to either change a column to SPARSE or add an additional SPARSE column. During the ALTER, each row is recomputed by writing a second copy of the row on the same data page. Because two copies of a row that exceed 4,009 bytes would exceed the 8,018 bytes allowed per page, the ALTER TABLE statement fails. The only workaround to this storage design issue are the following:
Storage Space:"
"Data Type: BIGINT
Storage Space:"
"Data Type: DECIMAL (P,S) and NUMERIC (P,S)
Storage Space:"
"Data Type: FLOAT(N)
Storage Space:"
"Data Type: INT
Storage Space:"
"Data Type: MONEY
Storage Space:"
"Data Type: REAL
Storage Space:"
"Data Type: SMALLINT
Storage Space:"
"Data Type: SMALLMONEY
Storage Space:"
"Data Type: TINYINT
Chapter 3: Lesson 2: Implementing Constraints
xx
Quick Check: What is the difference between a primary key and a unique constraint?
A primary key does not allow NULLs, whereas the column(s) of a unique constraint allow NULLs. In addition, a table can have only one primary key but can have multiple unique constraints.
Quick Check: What restrictions does the parent table have when creating a foreign key?
The parent table must have a primary key that is used to define the relationship between the parent and child tables. In addition, if the parent's primary key is defined on multiple columns, all the columns must exist in the child table for the foreign key to be created.
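Example: Primary Key and Foreign Key
A minimal sketch of a parent/child relationship; table, column, and constraint names are illustrative:
CREATE TABLE dbo.Customer
(CustomerID INT NOT NULL CONSTRAINT PK_Customer PRIMARY KEY);

CREATE TABLE dbo.CustomerOrder
(OrderID    INT NOT NULL CONSTRAINT PK_CustomerOrder PRIMARY KEY,
 CustomerID INT NOT NULL
    CONSTRAINT FK_CustomerOrder_Customer REFERENCES dbo.Customer (CustomerID));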
D. COLLATE"
"Columns with which properties cannot be sparse columns? (Choose 2. Each forms a separate answer)
Exam Tip:
The default option for a primary key is clustered. When a clustered primary key is created on a table that is compressed, the compression option is applied to the primary key when the table is rebuilt.
Exam tip:
Although a NULL does not equal another NULL and NULLs cannot be compared, a unique constraint treats a NULL as it does any other data value. If the unique constraint is defined on a single column, then a single row within the table is allowed to have a NULL within that column. If the unique constraint is defined across more than one column, then you can store NULLs within the columns so long as you do not produce a duplicate across the combination of NULLs and actual data values.
Lesson Summary:
"* A primary key defines the column(s) that uniquely identify each row in a table.
Check Constraint
Limits the range of possible values that are allowed in a column.
Default Constraint
Supplies a value for a column when the application does not specify the column for an INSERT.
FILESTREAM
A property that enables SQL Server to store files in a specified directory on the operating system while still treating files as part of the database for management, backup and restore purposes.
Foreign Key
Enforces referential integrity between two tables.
Identity Column
A column with the IDENTITY property that automatically supplies an incrementing value when rows are inserted into the table.
Primary Key
Designates the column(s) within a table that uniquely identify every row. A primary key does not allow NULLs
Schema
A container within a database that owns objects; every object belongs to a schema, and an object cannot be created outside a schema.
Schema Collection
One or more XML schemas that are loaded into SQL Server as a group. XML schemas are used to validate the contents of XML columns.
Sparse Column
A property applied to a column that allows NULLs which optimizes storage by not requiring any space for the NULLs stored within the column.
Unique constraint
Designates the column(s) whose values are required to be unique within the table. A unique index differs slightly from a primary key because columns for a unique index allow NULLs.
Caution: Cascading
One of the options for a foreign key is CASCADE. You can configure a foreign key such that modifications of the parent table are cascaded to the child table. For example, when you delete a customer, SQL Server also deletes all the customer's associated orders. Cascading is an extremely bad idea. It is very common to have foreign keys defined between all the tables within a database. If you were to issue a DELETE statement without a WHERE clause against the wrong table, you could eliminate every row, in every table within your database, very quickly. By leaving the CASCADE option off for a foreign key, if you attempt to delete a parent row that is referenced, you get an error.
Chapter 4: Lesson 1: Index Architecture
xx
Quick Check: What are the three types of pages within an index?
An index can contain root, intermediate and leaf pages. An index has a single root page defined at the top of the index structure. An index can have one or more levels of intermediate pages, but it is optional. The leaf pages are the lowest-level page within an index.
Quick Check: What type of structure does SQL Server use to construct an index?
SQL Server uses a B-Tree structure for indexes.
D. CREATE NONCLUSTERED INDEX idx_product ON Products.Product (ProductName, SKU, ProductID, ListPrice, ShortDescription)"
"Correct Answers: B and C
Exam tip:
If you are creating an index on a sparse column, you should use a filtered index to create the most compact and efficient index possible.
Lesson Summary:
"* SQL Server creates an index using a B-Tree structure.
Note: Index Entry Storage
Pages in SQL Server can store up to 8,060 bytes of data. So an index created on a column with an INT data type can store 2,015 values on a single page within an index, whereas an index based on a column with a datetime2 data type can store only about half as many values per page, or 1,007 values per page.
Note: Page Splits
Keep in mind that rows on an index page are maintained in sorted order, so SQL Server always writes any new entries into the correct sorted location when page splitting. This can cause rows to move between pages, and page splits can occur at any level within the storage structure.
Chapter 4: Lesson 2: Designing Indexes
xx
Quick Check: How does the FILLFACTOR option affect the way an index is built?
The FILLFACTOR option reserves space on the intermediate and leaf levels of the index.
Quick Check: What is the difference between a clustered and a nonclustered index?
A clustered index imposes a sort order on the data pages in the table. A nonclustered index does not impose a sort order.
D. A filtered, nonclustered index on Color that includes the ProductType and WoodSpecies columns"
"1. Correct Answer: D
Exam Tip. Online Indexing/Rebuild/Restore
Online operations such as online index creation/rebuild or online restore are available only in SQL Server 2008 Enterprise.
Lesson Summary
"* Clustered indexes specify a sort order for data pages in a table.
Note: FILESTREAM Data
The FILESTREAM_ON clause is used when clustered indexes are created on a table containing FILESTREAM data. If you specify a different filegroup in the FILESTREAM_ON clause than where the FILESTREAM data is currently located, all the FILESTREAM data will be moved to the newly specified filegroup during the creation of the clustered index.
Because XML columns can contain such large amounts of data, how can you improve the performance of queries against XML data?
You can create a special type of index called an XML Index.
How are non-clustered indexes different from clustered indexes?
They do not impose a sort order on the table. Because of this (unlike clustered indexes), you can create multiple non-clustered indexes on the same table.
How can you create covering indexes with more than 16 columns or more than 900 bytes?
Indexes that exceed those limitations can be created by using the INCLUDE clause.
How do filtered indexes work?
Only the index keys matching the where clause are added to the index.
How do Indexes created using the INCLUDE clause get around the Index size limitations?
Included columns become part of the index at only the leaf level. Values from included columns do not appear in the root or intermediate levels of an index and do not count against the 900 byte limit for an index.
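Example: Included Columns and a Filtered Index
A minimal sketch combining the INCLUDE clause with a WHERE filter; the table, columns, and index name are illustrative:
CREATE NONCLUSTERED INDEX idx_Product_Color
ON dbo.Product (Color)
INCLUDE (ProductName, ListPrice)   -- included columns exist only at the leaf level
WHERE Color IS NOT NULL;           -- filtered index: only matching keys are indexed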
How does a clustered index determine the physical order of data on a disk?
Trick Question: It does not -- it just ensures that the page chain of the index is sorted logically -- allowing SQL Server to traverse directly down the page chain to locate data.
How does FILLFACTOR apply to the Root and Intermediate level pages of an index?
It doesn't. It only applies at the Leaf level of an index.
How do forwarding pointers improve the performance of a table?
Trick Question: They don't. Although the presence of a handful of forwarding pointers for a table is generally not a concern, having a large number of forwarding pointers can cause severe performance degradation.
How is a clustered index special?
A clustering index is special because it causes SQL Server to arrange the data in the table according to the clustering key.
How is a filtered index different from a regular index?
It uses a WHERE clause.
How many clustered indexes can you have per table?
Only one, because a table can only be sorted one way, and a clustering index imposes a sort order on a table.
How much data can be contained in a single column using the XML Data Type?
2 GB of Data.
How much space does SQL Server reserve at the Root and Intermediate levels of an index?
It only ever reserves enough space for approximately one additional row to be added.
If you don't want to consume space in the database where an index is created, what can you do to prevent this?
You can use the SORT_IN_TEMPDB option, which causes the work tables for the sort operations to be generated in the tempdb database.
If you know you're going to be introducing large numbers of leaf-level pages, which in turn will cause page splits on the intermediate levels and potentially the root page -- what command can you use to improve performance?
You can use the PAD_INDEX option, which causes the FILLFACTOR to be applied to the intermediate level pages and the root page of an index.
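Example: FILLFACTOR and PAD_INDEX
A minimal sketch of reserving free space at the leaf level and extending it to the intermediate and root pages; the table, column, and index name are illustrative:
CREATE NONCLUSTERED INDEX idx_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (FILLFACTOR = 80, PAD_INDEX = ON);   -- leave 20 percent free space, including above the leaf level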
The leaf level of a clustered index contains the data -- what does the leaf level of non-cluster index contain?
It contains a pointer to the data.
What are the columns called that a defined for a clustering index?
Clustering key.
What are the maximum columns and byte size for a non-clustered index?
The same as the limits for clustered indexes, max of 16 columns and 900 bytes.
What are the other types of indexes -- besides clustered indexes?
The only other type of index is a non-clustered index.
What are the restrictions of a filtered index?
"1. The must be nonclustered indexes
What are the two types of XML indexes?
Primary and secondary.
What causes the selectivity of an index to increase?
An increased number of unique values.
What command can you use during a multi-row insert operation to ensure that duplicate values don't cause the entire insert to be rejected (just the offending row)?
IGNORE_DUP_KEY option has to be enabled. A warning will be generated, but only those rows that are duplicates will be rejected.
What command can you use during the creation of an index to lock the entire table to prevent changes while the index is created?
WITH ONLINE = OFF
What do you call a table without a clustered index?
A table without a clustered index is referred to as a heap.
What does FILLFACTOR specify?
It specifies the percentage of free space that should be left on the leaf level of an index during creation or rebuild. By leaving space on the leaf level, you can write a small number of rows to a leaf-level page before a page split is required, thereby slowing the rate of fragmentation of an index.
What does the FILLFACTOR do to improve database performance?
It controls the rate at which page splits occur.
What does the ON clause specify in the creation of an index?
It specifies the filegroup that the index is created on. However, because the leaf level of a clustered index is the row of data in the table, the table and the clustered index are always stored on the same filegroup.
What is special about covering indexes?
They can be used to completely satisfy all of the queries for a given table -- as they contain all of the data for an entire table.
What is the component used by SQL Server to determine whether an index should even be used to satisfy a query?
Query optimizer.
What is the default option in SQL Server for a Primary key?
Clustered. Unless you specify otherwise, SQL Server creates a clustered index for a primary key. Likewise, a unique constraint is physically implemented as a unique index. Because a primary key is also unique by default unless it is specified as nonclustered, SQL Server physically implements each primary key as a unique, clustered index.
What is the difference in table locks between the creation of a clustered index and a non-clustered index?
When a table is locked during the creation of a clustered index, the lock prevents both select statements and data modifications. When a table is locked for a non-clustered index, select statements are allowed, but you cannot modify the data.
What is the main purpose of filtered indexes?
To handle the cases where significant skew exists in the data.
What is the maximum number of columns that you can define an index with?
16 Columns
What is the maximum number of non-clustered indexes on a table?
1,000 non-clustered indexes per table.
What is the maximum size of the index key?
900 bytes
What is the most important option that can be specified during the creation of an index?
FILLFACTOR
What is the name given to an index that contains all of the columns of data in a table?
Covering index
What is the name given to the degree to which values in the column allow you to locate small sets of data?
The selectivity of an index.
What is the name given to the structure created when an index is created that stores information about the relative distribution of data values within a column?
Histogram
What is the ONLINE default setting during index creation?
Off
Which versions of SQL Server support online index creation?
Enterprise.
Why should you sometimes use covering indexes?
If you can construct covering indexes for frequently accessed data, you can increase the response time for queries by avoiding additional reads to the underlying tables.
T/F -- Certain tables should never have clustering indexes.
False -- Every table should have a clustered index. One of the main purposes of a clustered index is to eliminate forwarding pointers.
Chapter 4: Lesson 3: Maintaining Indexes
xx
Quick Check: What happens when an index is disabled?
An index that is disabled is no longer used by the optimizer. In addition, as data changes in the table, any disabled index is not maintained.
Quick Check: What is the difference between the REBUILD and REORGANIZE options of the ALTER INDEX?
REBUILD defragments all levels of an index. REORGANIZE defragments only the leaf level of the index.
D. Use Integration Services to import the data"
"Correct Answer: C
Lesson Summary:
"* You can defragment indexes using either the REBUILD or REORGANIZE options.
Balanced Tree (B-Tree)
A symmetric, linear structure used to construct an index. A B-Tree provides a compact structure that enables searching very large volumes of data with a small number of read operations.
Clustered Index
An index that imposes a sort order on the pages within the index. A table can have only one clustered index.
Covering Index
An index that allows a query to be satisfied by using only the entries within the index.
Nonclustered Index
An index that does not impose a sort order on the data pages within the table. You can have up to 1,000 nonclustered indexes in a table.
Tesselation
The process that is used to construct a spatial index. Tessellation counts the cells that a spatial object touches within the four-level grid hierarchy.
How do you defragment an index in SQL Server?
You use the ALTER INDEX command.
How does the REBUILD option of the ALTER INDEX command work?
The REBUILD option rebuilds all levels of the index and leaves all pages filled according to the FILLFACTOR setting of an index. If you rebuild the clustered index, only the clustered index is rebuilt. However, rebuilding the clustered index with the ALL keyword also rebuilds all nonclustered indexes on the table. The rebuild of an index effectively recreates the entire B-Tree structure, so unless you specify the ONLINE option, a shared table lock is acquired, preventing any changes until the rebuild operation completes.
How does the REORGANIZE option of the ALTER INDEX command work?
The REORGANIZE option removes fragmentation only at the leaf level. Intermediate level pages and the root page are not defragmented during the reorganize. REORGANIZE is always an online operation that does not incur any long-term blocking.
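Example: Defragmenting an Index
A minimal sketch of the two ALTER INDEX options; the index and table names are illustrative:
ALTER INDEX idx_Orders_OrderDate ON dbo.Orders REBUILD WITH (ONLINE = ON);  -- ONLINE requires Enterprise
ALTER INDEX idx_Orders_OrderDate ON dbo.Orders REORGANIZE;                  -- leaf level only, always online
ALTER INDEX ALL ON dbo.Orders REBUILD;                                      -- rebuilds every index on the table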
What command do you use to enable an index that has been previously disabled?
ALTER INDEX REBUILD
What does the FILLFACTOR option for an index do?
It determines the percentage of free space that is reserved on each leaf-level page of the index when an index is created or rebuilt. The extra free space reduces the rate at which page splits occur.
What does the FILLFACTOR represent?
It represents the percentage full. For example, a fill factor of 75 means that 25 percent of the space on each leaf-level page is left empty to accommodate future values.
What happens to a table when a clustered index is disabled?
The entire table becomes inaccessible.
What is the automatic mechanism that SQL server uses to reduce page and index fragmentation?
Trick question -- there is none. Because SQL Server does not automatically remove fragmentation, you must periodically defragment an index and reclaim the empty space to preserve its performance benefits.
When defragmenting an index using the ALTER INDEX command, what are your options?
REBUILD and REORGANIZE.
Why do you need to manage the fragmentation of an index over time?
Because the data within an index is stored in sorted order, changes to the data can cause rows to move between pages (causing page splits), fragmenting the index over time.
Chapter 5: Lesson 1: Creating and Populating Full Text Indexes
xx
Quick Check: Before you can create a full text index, which structure do you need to create?
Full text indexes are contained within full text catalogs. Therefore, you must create a full text catalog prior to creating a full text index.
Quick Check: What do you need to specify to create a full text index on documents stored within a FILESTREAM?
SQL Server stores the documents within a VARBINARY(MAX) column with the FILESTREAM property enabled. In order to create a full text index, you need to also specify a type column that designates what type of document is stored in the VARBINARY(MAX) column to load the appropriate filter for the word breaker to use.
D. Create a full text index on the table of product descriptions for the description column and specify CHANGE_TRACKING AUTO."
"1. Correct Answers: B and D
D. ALTER FULLTEXT INDEX ON <table_name> START UPDATE POPULATION."
"Correction Answer C:
Exam Tip: Full Text Indexes
Prior to SQL Server 2008, you associated the full text catalog with a filegroup only for backup purposes, with the entire contents of the catalog maintained in a directory structure on the operating system. In SQL Server 2008, Microsoft eliminated the external file structure, and the contents of a full text catalog are now stored within the database.
Lesson Summary:
"* Before creating a full text index, you must create a full text catalog that is mapped to a filegroup to contain one or more full text indexes.
Note: Filegroup Placement
Although it is possible to store a full text catalog on a filegroup that also contains relational data, it is recommended that you create a separate filegroup for full text indexes to separate the input/output (I/O) against the full text catalog from that of the relational data.
In SQL Server 2008, why is the IN PATH clause no longer used with regards to Full-Text indexes?
Because full-text indexes are now stored within the database, the IN PATH clause has been deprecated and should no longer be used.
Point of differentiation -- Full-text index vs. Relational Index
Unlike a relational index, population of a full text index is not an immediate process, because the data has to be submitted to the indexing engine, which then applies word breakers, stemmers, language files, and stop lists before merging the changes into the index.
What are the options in SQL Server for maintaining a list of changes to the indexed data for the change tracking feature of full-text indexing?
MANUAL or AUTO
What data column types can have full-text indexes created for them?
CHAR/VARCHAR, XML and VARBINARY
What is a full-text index?
It is an index that gives you the ability to query large volumes of unstructured data rapidly.
What is the first step in building a full-text index?
You create a storage structure. Full text indexes have a unique internal structure that is maintained within a separate storage format.
What is the impact of setting CHANGE_TRACKING to OFF in the full-text index engine of SQL Server?
Whenever you want to update the index to reflect the changes in the underlying data, you have to repopulate the index completely.
What is the impact of setting the change tracking option to AUTO in the full-text index engine?
SQL Server automatically updates the full text index as the data is modified.
What is the impact of setting the change tracking option to MANUAL in the full-text index engine?
It means that you are responsible for periodically propagating the changes into the full text index.
What is the most common use of the VARBINARY(MAX) column?
To store documents using the new FILESTREAM capabilities in SQL Server 2008.
What is the name given to the storage format that supports full-text indexes?
full text catalog. Each full-text catalog contains one or more full-text indexes.
What option is available to the full text index that determines how SQL Server maintains the index when the underlying data changes?
CHANGE_TRACKING
What parameter is the single column within the table or indexed view that uniquely identifies a row?
The KEY INDEX
What parameter is used to help you designate the column that contains the filter type for the full text index engine?
The TYPE COLUMN
What parameter is used to help you designate the language of the data being indexed by the full text index engine?
LANGUAGE
When creating full-text indexes on which data column types can the full-text engine parse the data directly and build an appropriate index?
CHAR/VARCHAR
When processing a VARBINARY(MAX) column, what is the other critical thing you have to do?
You have to specify a column that designates the type of document so that the full text parser can load the appropriate filter assembly. This allows the full text indexer to appropriately read the document and make the content available to the full-text index.
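Example: Creating a Full Text Catalog and Index
A minimal sketch; the catalog, table, column, and key index names are illustrative. The TYPE COLUMN and LANGUAGE clauses apply to the VARBINARY(MAX) document column:
CREATE FULLTEXT CATALOG DocumentCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.ProductDocument
    (DocumentFile TYPE COLUMN FileExtension LANGUAGE 1033)
    KEY INDEX PK_ProductDocument
    ON DocumentCatalog
    WITH CHANGE_TRACKING AUTO;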
Quick Check: Which predicate is used to perform proximity and synonym searches?
CONTAINS and CONTAINSTABLE are used for proximity, thesaurus, and inflectional searches.
Quick Check: Which predicate performs fuzzy searching by default?
FREETEXT and FREETEXTTABLE predicates perform wildcard searches by default.
D. FREETEXTTABLE"
"1. Correct Answers: A and C
D. ISABOUT with a WEIGHT keyword"
"2. Correct Answer: D
Exam Tip:
All search terms used with full text are Unicode strings. If you pass in a non-Unicode string, the query still works, but it is much less efficient because the optimizer cannot use parameter sniffing to evaluate distribution statistics on the full text index. Make certain that all terms you pass in for full text search are always typed as Unicode for maximum performance.
Lesson Summary: Querying Full-Text Data
"* The FREETEXT and CONTAINS predicates return a value of True or False, which you can then use in a query similar to an EXISTS clause to restrict a result set.
Note: Language Parameters
Although you can employ a different language for a query than an index was built upon, you will not automatically improve the accuracy of a full text search. Because stemmers are language-specific, if the index was built with a language different from the language specified in the query, you will not be able to find any inflectional forms of words with your query.
Note: Thesaurus Files
A thesaurus file exists for each supported language. All thesaurus files are XML files stored in the FTDATA directory underneath your default SQL Server installation path. The thesaurus files are not populated, so to perform synonym searches, you need to populate the thesaurus files. You will learn about thesaurus files in Lesson 3. "Managing Full Text Indexes"
Chapter 5: Lesson 2: Querying Full Text Data
xx
For queries of full-text data that require greater flexibility, which predicate should you use?
CONTAINS
In addition to CONTAINS and FREETEXT, what are the two additional commands that produce a result set with additional columns of information?
CONTAINSTABLE and FREETEXTTABLE
In full text index searches -- what keyword is used to do proximity searching?
NEAR
What are the two commands that SQL Server provides to query full text data?
CONTAINS and FREETEXT
Why?
Because the CONTAINS predicate allows you to (1) Search Word Forms (2) Search for Word proximity and (3) Provide relative weighting to terms
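Example: Querying with CONTAINS and FREETEXT
A minimal sketch of the full text predicates; the table, column, and search terms are illustrative:
-- Word form (inflectional) search
SELECT ProductID, Description FROM dbo.ProductDescription
WHERE CONTAINS(Description, 'FORMSOF(INFLECTIONAL, ride)');

-- Proximity search
SELECT ProductID, Description FROM dbo.ProductDescription
WHERE CONTAINS(Description, 'mountain NEAR bike');

-- Fuzzy search
SELECT ProductID, Description FROM dbo.ProductDescription
WHERE FREETEXT(Description, 'lightweight aluminum frame');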
Quick Check: What do you configure to exclude words from your index and search arguments?
A stop list contains the list of words that you want excluded from a full text index as well as from search arguments.
Quick Check: Which type of files enable searching based on synonyms?
A thesaurus file allows you to configure synonyms for search arguments
D. Parse the inbound query and remove any common words from the search arguments."
"1. Correct Answer: A
Exam Tip: Stop Lists
Although many features of SQL Server operate the same from one version to another, others are enhanced or changed. You can assume that you will have questions on an exam which are designed to test whether you know the change in behavior for the new version. In SQL Server 2005 and prior versions, you configured noise word files that were in the FTDATA directory. In SQL Server 2008, you configure stop lists that are contained within a database in SQL Server. It is very likely that if you have a question concerning the configuration of stop words, the available answers will include both the SQL Server 2005 and SQL Server 2008 behaviors, and any of the SQL Server 2005 behaviors would be incorrect answers.
Lesson Summary: Managing Full Text indexes
"* You manage thesaurus files by editing the language-specific file that is contained within the FTDATA directory for the instance.
Full Text Catalog
The storage container for full text indexes.
Full Text Filter
An assembly that is loaded by the full text engine and that allows interpretation of various document formats stored within a VARBINARY (MAX) column.
Full Text Index
A specialized index for unstructured data. Full text indexes allow unstructured data to be queried across word forms, as well as with respect to word proximity within the data.
Stemmer
An assembly loaded by the full text indexing engine that produces language specific inflectional forms for verbs within the data being indexed.
Stop List
A list of words that the full text engine should ignore when the index is populated. A stop list allows you to filter out common terms within your industry or organization so that the index does not become polluted with words you do not want to search for.
Thesaurus File
An XML file that is created for a specific language for use with a full text index. Thesaurus files allow you to return synonyms of search terms, such as when you search for metal and the index also returns gold, silver, copper, aluminum, etc.
Word Breaker
A language specific assembly that locates word breaks so that individual words can be tokenized by the full text indexing engine.
A thesaurus can contain expansion sets or replacement sets. What is the difference of the two?
A replacement set defines a term or set of terms that are replaced in the search argument prior to the word breaker tokenizing the argument list. An expansion set defines a set of terms that are used to expand upon a search argument. When an expansion set is used, a match on any term within the expansion set causes SQL Server to retrieve a row.
Chapter 5: Lesson 3: Managing Full Text Indexes
xx
What are Stop Lists?
They are used by SQL Server to exclude words that you do not want included in a full text index. You exclude commonly occurring words from an index so that valid, targeted results can be returned for well-formed searches.
What are the three options for populating a full-text index?
Full, Incremental and Update.
What are two ways to easily increase the usefulness of a full-text index?
Creating thesaurus files and building stop lists to filter common words out of the index.
What command would you issue to initiate the population of a full-text index?
ALTER FULLTEXT INDEX
What directory is the thesaurus stored in?
FTDATA directory
What do you have to do in SQL Server before it can begin using language-specific searches that automatically searches for synonyms?
You have to define the Thesaurus in SQL Server.
What does a thesaurus file enable you to do with full text queries?
It allows you to retrieve rows that match the search argument along with synonyms of the search argument.
What happens in a full-text index when you use the FULL option?
Reprocesses every row from the underlying data to rebuild the full text index completely.
What happens in a full-text index when you use the INCREMENTAL option?
Processes only the rows that have changed since the last population; requires a timestamp column on the table.
What happens in a full-text index when you use the UPDATE option?
Processes any changes since the last time the index was updated; requires that the CHANGE_TRACKING option is enabled for the index and set to MANUAL.
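Example: Populating a Full Text Index
A minimal sketch of starting each population type; the table name is illustrative:
ALTER FULLTEXT INDEX ON dbo.ProductDocument START FULL POPULATION;
ALTER FULLTEXT INDEX ON dbo.ProductDocument START INCREMENTAL POPULATION;  -- requires a timestamp column
ALTER FULLTEXT INDEX ON dbo.ProductDocument START UPDATE POPULATION;       -- requires CHANGE_TRACKING MANUAL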
What type of file is the thesaurus file?
It is an XML file
What were Stop Lists known as in previous versions of SQL Server?
Noise word files.
Chapter 6: Lesson 1: Creating a Partition Function
xx
Quick Check: What data types cannot be used with partition functions?
You cannot use text, ntext, image, xml, varbinary(max), varchar(max) or any CLR data types.
Quick Check: What is the maximum number of partitions allowed for a table?
The maximum number of partitions for a table is 1,000.
Quick Check: What is the maximum number of boundary points allowed for a partition function?
The maximum number of boundary points for a partition function is 999.
D. Database Mirroring"
"1. Correct Answer: C
Exam Tip: Creating Partition Functions
You can partition an existing object after it has been populated with data. To partition an existing table, you need to drop the clustered index and re-create the clustered index on the partition scheme. To partition an existing index or indexed view, drop the index and recreate the index on a partition scheme. You will want to be very careful when partitioning existing objects that already contain data, because implementing the partition will cause a significant amount of disk input/output (I/O).
Lesson Summary: Creating A Partition Function:
"* A partition function defines the boundary points for a set of partitions.
Note: Code Reuse
The definition of a partition function does not provide a clause for an object, column or storage. This means that a partition function is a stand-alone object that you can apply to multiple tables, indexes, or indexed views if you choose.
In a partition function how many partitions can you have for an object?
1,000 partitions for an object.
In a partition function what is the maximum number of boundary points?
999 boundary points.
In a partition function, where are null values stored?
In the leftmost partition.
In the creation of a partition function, how do you ensure that any value stored in a column always evaluates to a single partition?
You have the partition function map the entire range of data, with no gaps present, and you make sure that you do not specify duplicate boundary points.
In the creation of a partition function, what do you use to define which partition will include a boundary point?
The LEFT and RIGHT parameters.
What are the two things that every partition function requires?
Name and Data Type.
What data types can be used for a partition function?
Any native SQL Server data type, except text, ntext, image, varbinary(max), timestamp, xml, and varchar(max).
What do you use in the creation of partition functions to specify whether the partition function is RANGE LEFT or RANGE RIGHT?
The AS clause.
What does a Partition Function define?
It defines a set of boundary points for which data will be partitioned.
What does Table Partitioning allow for?
It allows tables, indexes, and indexed views to be created on multiple filegroups while also allowing DBAs to specify which portion of the object will be stored on a specific filegroup.
What does the data type do in a partition function?
It defines the limits of the boundary points that can be applied.
What does the FOR VALUES clause do in a partition function?
It is used to specify the boundary points for the partition function.
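Example: Creating a Partition Function
A minimal sketch using RANGE RIGHT with two boundary points, which yields three partitions; the function name and boundary values are illustrative:
CREATE PARTITION FUNCTION pf_OrderDate (DATETIME)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01');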
What is Table Partitioning?
Table partitioning is a means of splitting large tables across multiple storage structures.
What is the three step process for partitioning a table?
"1. Create a partition function
What must you do to imprecise data types, such as real, and to computed columns to use them in the creation of partition functions?
They must be persisted.
T/F -- You can use Common Language Runtime (CLR) data types in the creation of Partition Functions?
False. You may not.
T/F -- You cannot use Transact-SQL user defined types in the creation of Partition Functions?
True. You cannot use them.
Chapter 6: Lesson 2: Creating a Partition Scheme
xx
Quick Check: Can you create a new filegroup at the same time that you are creating a partition scheme?
No. Any filegroups that you specify in the CREATE PARTITION SCHEME statement must already exist in the database.
Quick Check: How many filegroups can you specify if you use the ALL keyword when defining a partition scheme?
You can specify exactly one filegroup when using the ALL keyword.
D. Create a job to delete orders that are older than 30 days."
"1. Correct Answers: A and C
Exam Tip:
If you specify the ALL Keyword when creating a partition scheme, you can specify a maximum of one filegroup.
Lesson Summary: Create a Partition Scheme
"* A partition scheme is a storage definition containing a collection of filegroups.
How must a partition scheme be defined?
It must be defined in such a way as to contain a filegroup for each partition that is created by the partition function mapped to the partition scheme.
What is a Partition Scheme?
It defines the storage structures and collection of filegroups that you want to use with a given partition function.
Important Filegroups
Any filegroup specified in the CREATE PARTITION SCHEME must already exist in the database.
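Example: Creating a Partition Scheme
A minimal sketch that maps the partition function above to filegroups that must already exist; the scheme and filegroup names are illustrative:
CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate TO (FG1, FG2, FG3);

-- Alternatively, map every partition to a single filegroup:
-- CREATE PARTITION SCHEME ps_OrderDate AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);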
Chapter 6: Lesson 3: Creating Partitioned Tables and Indexes
xx
Quick Check: What property must be set to use a computed column as a partitioning key?
A computed column must be PERSISTED
Quick Check: Which clause of the CREATE TABLE or CREATE INDEX statements is used to partition the object?
The ON clause is used to specify the storage structure (filegroup or partition scheme) for the table or index.
D. Database snapshots"
"1. Correct Answer: B
Lesson Summary: Creating Partitioned Tables and Indexes
"* The ON clause is used to specify the storage structure, filegroup, or partition scheme to store a table or index.
Note: Partial Backup and Restore
Partitioning has an interesting management effect on your tables and indexes. Based on the definition of the partition function and partition scheme, it is possible to determine the set of rows which are contained in a given filegroup. By using this information, it is possible to back up and restore a portion of a table as well as to manipulate the data in a portion of a table without affecting any other part of the table.
What is the final step in creating a partitioned table or index?
The final step in creating a partitioned table or index is to create the table or index on a partition scheme through a partitioning column.
Whenever you're creating a partitioned table or index, what is the difference compared to creating a regular table or index?
You include the ON clause and specify the partition scheme, as opposed to the filegroup you would normally specify there during the creation of regular tables or indexes.
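Example: Creating a Partitioned Table
A minimal sketch placing a table on the partition scheme above through its partitioning column; the table and column names are illustrative:
CREATE TABLE dbo.Orders
(OrderID   INT      NOT NULL,
 OrderDate DATETIME NOT NULL)
ON ps_OrderDate (OrderDate);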
Chapter 6: Lesson 4: Managing Partitions
xx
Quick Check: Which operators are used to add or remove boundary points from a partition function?
The SPLIT operator is used to introduce a new boundary point. The MERGE operator is used to remove a boundary point.
Quick Check: Which operator is used to move partitions between tables?
The SWITCH operator is used to move partitions between tables.
D. Partitioning"
"1. Correct Answer: D
Exam Tip - Distributing and partitioning data
You need to know that in a SWITCH operation, you cannot move data from one filegroup to another or exchange two partitions with data.
Lesson Summary:
"* SPLIT is used to introduce a new boundary point to a partition function.
Index Alignment
An index that utilizes the same partition function as the table against which it is created.
Partition function
A definition of boundary points that is used to determine how data in a partitioned table is split.
Partition scheme
A definition of storage that is mapped to a partition function that determines where each portion of data will be stored.
Partitioning key
The column in the table that is used to partition.
How can you ensure that data and corresponding indexes are kept together as a single unit during a restore?
Store them on the same filegroup.
What are the two operators available to manage the boundary point definitions of Partitioned Indexes and Partitioned Tables?
SPLIT and MERGE
What does "aligned" mean with regards to clustered indexes and tables?
It means that a table and all of its indexes are partitioned using the same partition functions.
What does the MERGE operator do?
The MERGE operator eliminates a boundary point from a partition function.
What does the SPLIT operator do?
The SPLIT operator introduces a new boundary point into a partition function.
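Example: SPLIT, MERGE, and SWITCH
A minimal sketch of managing partitions; names and boundary values are illustrative. SPLIT requires that a NEXT USED filegroup be designated first, and the target of a SWITCH must already exist with an identical structure on the same filegroup:
ALTER PARTITION SCHEME ps_OrderDate NEXT USED FG4;
ALTER PARTITION FUNCTION pf_OrderDate() SPLIT RANGE ('2010-01-01');   -- add a boundary point
ALTER PARTITION FUNCTION pf_OrderDate() MERGE RANGE ('2008-01-01');   -- remove a boundary point
ALTER TABLE dbo.Orders SWITCH PARTITION 1 TO dbo.OrdersArchive;       -- move a partition between tables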
Why must you partition the clustered index and the table the same way?
Because SQL Server cannot store the clustered index in a structure separate from the table.
Chapter 7: Lesson 1: Importing and Exporting Data
xx
Quick Check: The Import and Export Wizard is based on which feature of SQL Server?
The Import and Export Wizard uses a subset of the SSIS feature.
Quick Check: What are the data formats that BCP supports and the command line switches for each format?
BCP can work with data in either a character or native format. The -c switch designates character mode while the -n switch is used for native mode.
Quick Check: Which parameter do you specify to export data using a query?
The queryout parameter is used to export the result set of a query.
Quick Check: Which sources and destinations is the Import and Export Wizard capable of using?
You can define any source or destination for which you have an OLE DB Provider.
D. A member of the bulkadmin role"
"Correct Answers: A and C
D. Move the data from Oracle to SQL Server using the OPENROWSET function."
"2. Correct Answer: C
Exam Tip: Importing and Exporting Data.
The exam tests you on whether you know which import/export option is most appropriate to a given situation.
Lesson Summary: Importing and Exporting Data
"* BCP is a lightweight, command-line utility that allows you to import and export data.
Note: Enforcing Check Constraints and Triggers
When you import data into a table using BCP, triggers and check constraints are disabled by default. If you want to enforce check constraints and fire triggers during the import, you need to use the -h switch. If you do not disable triggers and check constraints during an import, you do not need ALTER TABLE permissions.
Bulk Copy Program
A command line utility that is used to import data from a file into a table, as well as export data to a file.
BULK INSERT
A T-SQL command that allows you to import data from a file into a table or view.
What are the two main differences between the BCP program and the BULK INSERT command?
The BULK INSERT command cannot export data, and because BULK INSERT is a T-SQL command, it does not require the instance name or login credentials to be specified.
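Example: BULK INSERT
A minimal sketch of loading a character-format file; the table name and file path are illustrative:
BULK INSERT dbo.Orders
FROM 'C:\Data\orders.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');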
What does BCP stand for?
The Bulk Copy Program
What does the -c switch used with BCP specify?
It specifies that the data in the file is in a character format.
What does the -n switch used with BCP specify?
It specifies that the data in the file is in the native format of SQL Server.
What does the -T switch do when used with BCP?
It specifies a trusted connection; BCP uses the Windows credentials of the account that is executing the BCP command to connect.
What do the -U and -P switches do when used with BCP?
They allow you to use a SQL Server login; -U specifies the login name and -P specifies the password to use.
What is one of the biggest drawbacks of the BCP program?
That it is a command-line program.
What is the most efficient way to import well-defined data in files into SQL Server as well as export tables to a file?
The BCP or Bulk Copy Program.
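Example: BCP Export and Import
A minimal sketch from the command line using native format and a trusted connection; the server, database, table, and file names are illustrative:
bcp AdventureWorks.dbo.Orders out C:\Data\orders.dat -n -T -S MYSERVER\INSTANCE1
bcp AdventureWorks.dbo.Orders in C:\Data\orders.dat -n -T -S MYSERVER\INSTANCE1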
What permissions does the account running BCP need if you're exporting data?
SELECT permission.
What permissions does the account running BCP need if you're importing data?
SELECT, INSERT, and ALTER TABLE permissions.
What should you use to import data into SQL Server if you need to perform transformations or execute error-handling routines during the import process?
SSIS -- SQL Server Integration Services
Why do you have to specify connection information for BCP to use?
Because it is an application, it does not have database or instance context.
Caution: Case Sensitivity
All command-line switches for BCP are case sensitive. For example you use -e to specify an error file, yet -E tells BCP to preserve identity values during an import.
T/F BCP has been designed over the years to work well for importing data that needs to perform error handling during the import process?
FALSE
Chapter 8: Lesson 1: Designing Policies
xx
Quick Check: How many conditions can be checked within a single policy?
A policy can check only a single condition.
Quick Check: How many facets can be checked within a single condition?
A condition can be defined with only one facet.
Quick Check: What are the allowed execution modes for a policy?
On demand, on schedule, on change - log only, on change - prevent.
Quick Check: What are the five objects that are used within Policy Based management?
The objects that are used with PBM are facets, conditions, policies, policy targets and policy categories.
Quick Check: which object has a property that allows you to mandate checking for all databases on an instance?
Policy category -- policy categories allow you to mandate checking of all databases within an instance.
D. Check the policies manually against the instance."
"1. Correct Answer: C
Exam Tip:
Defining a condition to be used as a policy target is a critical component of well-defined policies. A policy fails during a check if the object does not conform to the criteria or if the property does not exist. For example, attempting to check that the Web Assistant is disabled against a SQL Server 2008 instance fails because the feature does not exist.
Lesson Summary:
"* You can build policies to enforce conditions across any version of SQL Server
Condition
The criteria that you want to check for a specified facet.
Facet
The .NET assemblies that define the behavior and properties available to be checked by a condition.
Policy category
A container with which one or more policies can be associated.
Policy target
The instance, database, schema, or object that you want to be checked by a policy.
Because you cannot set all policies to enforce compliance, what do you do?
You need to manually check, on a regular basis, the policies that cannot be enforced.
How do you check or enforce policies?
To check or enforce policies, you create a subscription to one or more policies.
How do you define a Condition in PBM?
You define conditions that you want to check or enforce for a policy by defining criteria for the properties of a facet.
If you define a condition using the advanced editor -- how can you execute it?
It must be executed manually; a policy that incorporates conditions created in the advanced editor cannot be scheduled.
If you don't want to duplicate policies on every instance, create subscriptions for each instance individually, or check compliance for each instance individually -- what are your options?
You can configure a Central Management Server inside SSMS and use it to evaluate policies against a single instance or against all instances registered in the CMS.
In designing policies for SQL Server, what are Policy Categories?
Policy Categories can be used to group one or more policies into a single compliance unit.
In SQL Server 2008, what is another phrase for Policy Based Management?
DMF -- Declarative Management Framework.
What are policies?
Policies are created for a single condition and set to either enforce or check compliance.
What are the comparison operators that can be used for properties of type Boolean?
<> or =
What are the comparison operators that can be used for properties of type string?
=, <>, LIKE, NOT LIKE, IN, or NOT IN
What are the execution modes for policies?
(1) On-Demand (2) On change, prevent (3) On change, log only (4) On schedule
What are the foundations of policies?
Conditions are the foundation of policies.
What can you do if a condition that you want to check for a facet does not have a specific property?
you can use the advanced editor to define complex conditions that compare multiple properties and incorporate functions.
What execution modes are available for policies that contain conditions that were defined using the advanced editor?
On demand is the only execution mode available for policies containing conditions defined using the advanced editor.
What is a Condition in Policy Based Management?
In PBM, conditions are the equivalent of a WHERE clause that defines the criteria needing to be checked.
What is a Facet in SQL Server?
They are the core objects upon which your standards are built. Facets define the type of object or option to be checked, such as database, Surface Area, and login. SQL Server ships with 74 facets, each with a unique set of properties.
What is policy targeting?
When conditions are used to specify the objects to compare the condition against. You can target a policy at the server level, such as instances that are SQL Server 2005 or SQL Server 2008; you can also target a policy at the database level, such as all user databases or all system databases.
What is the "On change, log only" policy execution mode?
Checks the policy automatically when a change is made using the event notification infrastructure.
What is the "On change, prevent" policy execution mode?
Creates data definition language (DDL) triggers to prevent a change that violates the policy
What is the "On Demand" policy execution mode?
Evaluates the policy when directly executed by the user.
What is the "On schedule" policy execution mode?
Creates a SQL Server agent job to check the policy on a defined schedule.
What is the default category for policies?
Trick question; if not specified, all policies belong to the Default category.
What is the maximum number of check steps that you can incorporate into a condition?
Trick question -- just 1. You cannot have a condition pull a list of objects, iterate across the list of objects, and then apply subsequent checks.
What problems does Policy Based Management or DMF (Declarative Management Framework) address?
The problem of standardizing your SQL Server instances.
What property can be set in a policy category such that when a sysadmin subscribes an instance to the policy category, all databases that meet the target set are controlled by the policies within the policy category and cannot be overridden?
The Mandate property.
What tool can you use to get a list of facet properties?
SSMS -- SQL Server Management Studio.
What type of data do conditions return?
Only true or false.
Where are the objects for PBM stored?
Within the msdb database.
Where can subscription occur?
Subscription can occur at either the instance or database level.
Who can subscribe a database within an instance to a policy category?
The owner of the database.
Who can subscribe an instance to a policy category?
A member of the sysadmin role.
T/F -- A policy subscription to a policy category set to Mandate cannot be overridden by a database owner.
True.
T/F -- You can check multiple criteria for a single facet property
TRUE
T/F -- you cannot check more than one property of a facet per condition?
False. You can check many properties of a facet within a single condition.
T/F -- you cannot define a single condition for multiple facets?
True. A single condition can't be defined for multiple facets. For example, you can check all 10 of the properties for the Surface Area Configuration facet in a single condition, but you have to define a second condition to check a property of the Surface Area Configuration for analysis services.
T/F -- you cannot export or import Policies and Conditions.
False. Policies and conditions can be exported to files as well as imported from files.
Chapter 9: Lesson 1: Backing Up a Database
xx
Quick Check: How can you detect and log corrupt pages?
Execute ALTER DATABASE <database name> SET PAGE_VERIFY CHECKSUM
Quick Check: What are the four types of backups?
You can execute full, differential, transaction log, and file/filegroup backups. A full backup is required before you can create a differential or transaction log backup.
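A minimal full backup sketch (database name and path are hypothetical):
BACKUP DATABASE AdventureWorks
TO DISK = 'Z:\Backups\AdventureWorks_full.bak'
WITH INIT,       -- overwrite any existing backup in the file
     CHECKSUM;   -- verify page checksums while writing the backup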
D. Back up the transaction log with the TRUNCATE_ONLY option."
"1. Correct Answer: C
Exam tip:
You need to know how to perform each type of backup that can be executed. Backing up and restoring databases and the entire SQL Server environment is a major focus of the exam.
Lesson Summary:
"* Full backups are the starting point for every backup procedure and recovery process. A full backup contains only the pages within the database that have been used.
Best Practice: Avoiding Backup Problems
To avoid confusion, it is recommended that you use a unique naming scheme that employs a date and time in the file name so that you can tell when a backup was taken based on the name of the backup file. Because backups are taken to reduce your risk of data loss, it is also never a good idea to include multiple backups in a single file.
Best Practice: Decreasing backup times
The overhead of compression is always worth it. The amount of time saved for a compressed backup far exceeds the overhead associated with the compression operation. Fortunately, SQL Server has a configuration option called backup compression default that you can set to always have backups compressed regardless of whether you explicitly specify compression.
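Assuming you want compression by default, a sketch of setting the instance-wide option:
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
-- subsequent backups are compressed unless the BACKUP statement says otherwise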
Note: Executing Maintenance Plans
Maintenance plans are based upon the tasks within SQL Server Integration Services (SSIS). So, when a maintenance plan executes, it first loads the SSIS engine. Then the .NET Framework interprets the tasks within the package, constructs the necessary backup statements, and executes the code generated.
Note: Identifying bad pages
It is recommended that you specify the CHECKSUM option to catch bad pages as early as possible. You do not want to encounter any surprises when you need to use the backup to restore a database.
Note: Repairing a corrupt page.
If the database is participating in a Database Mirroring session, a copy of the corrupt page is retrieved from the mirror. If the page on the mirror is intact, the corrupt page is repaired automatically with the page retrieved from the mirror.
Note: Restoring a Striped Backup
When SQL Server stripes a backup across multiple devices, all devices are required to successfully restore. SQL Server does not provide any redundancy or fault tolerance within the stripe set. A stripe set is used strictly for performance purposes.
Service Master Key
The symmetric key for the instance that is defined by the service account and used as the root of the encryption hierarchy within an instance.
Before ever doing a transaction log backup -- what has to have been done first?
You have to have taken a full backup first. After the first full backup -- so long as the transaction log chain is not interrupted -- you can restore the database to any point in time.
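A transaction log backup sketch (requires the Full or Bulk-Logged recovery model; names are hypothetical):
BACKUP LOG AdventureWorks
TO DISK = 'Z:\Backups\AdventureWorks_log_0100.trn'
WITH CHECKSUM;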
Explain a good security strategy for making sure your offsite backups are secure?
Store your master keys and certificates in a different offsite storage than your encrypted backup files.
Given that completely backing up all data is a critical part of the DBA job, what is a good reason for doing partial backups?
If you have sections of your data that are READ ONLY, it doesn't make sense to include them in your regular backup routines. You'd keep the last good copy of the read-only data and back up only the filegroups that are READ_WRITE.
How do you ensure that your backup files are good?
You validate backups using the VERIFYONLY command.
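For example (hypothetical path), you can validate a backup file without restoring it:
RESTORE VERIFYONLY
FROM DISK = 'Z:\Backups\AdventureWorks_full.bak'
WITH CHECKSUM;   -- recompute and compare the page checksums stored in the backup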
How do you figure out which types of backups best meet your recovery needs?
You start with the recovery requirements and procedures.
How do you keep the COPY_ONLY option from affecting the differential change map and therefore impacting differential backups?
Trick question; by its nature, the COPY_ONLY option does not affect the DCM and therefore has no impact on differential backups.
How do you manually reset the LSN log back to 1?
Trick Question -- you can't. The LSN starts at 1 when the database is created and increments to infinity. An LSN is never reused for a database and always increments. It provides a sequence number for every change made to a database.
How do you restore a database to a specific point in time?
By using transaction log backups. Based on the sequence number, it is possible to restore one transaction log backup after another to recover a database to any point in time by simply following the chain of transactions as identified by the LSN.
How does SQL Server accommodate changes to the database while ensuring that backups are consistent for restore purposes?
It locks the database, blocking all transactions; then it places a mark in the transaction log before releasing the lock; it extracts all the pages that aren't empty in the data files and writes them to the backup; it then relocks the database and places a second mark in the transaction log before releasing the lock again; finally, it extracts the portion of the log between the two marks and appends it to the backup to account for the transactions that were logged during the backup.
How does SQL Server track each extent that has been changed following a full backup?
It uses the DCM -- a special page in the header of a database called the Differential Change Map.
How should you grant permissions for backups?
You could grant read/write access on the backup directory directly to the SQL Server service account, but you should instead grant read/write access to the Windows group SQLServerMSSQLUser$<machine_name>$<instance_name>, which contains the SQL Server service account.
If you don't specify a directory on the target device during backup, what does SQL Server do?
SQL Server performs a backup to disk and writes the file to the default backup directory configured for the instance.
If you're doing partial backups, what backup procedure would you use?
Filegroup backups, and you'd back up only the filegroups that are READ_WRITE.
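A sketch of both approaches (database, filegroup names, and paths are hypothetical):
-- partial backup: skip read-only filegroups
BACKUP DATABASE AdventureWorks READ_WRITE_FILEGROUPS
TO DISK = 'Z:\Backups\AdventureWorks_partial.bak'
WITH INIT;
-- or target a single filegroup explicitly
BACKUP DATABASE AdventureWorks
FILEGROUP = 'FG_Current'
TO DISK = 'Z:\Backups\AdventureWorks_fg_current.bak';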
In SQL Server, how are the permissions defined for backups?
All backups execute under the security context of the SQL Server service account.
In the event of a loss of a certificate, can you create a new one -- and are there any consequences?
Yes, you can create a new one, but you cannot access any data that was encrypted with the certificate.
More Info: Master Keys
Chapter 11 has additional information about the use and management of certificates and master keys.
Once an instance has been created, what command do you have to run to create the service master key?
Trick Question: When you create an instance, the service master key is created automatically.
Under what conditions is the service master key regenerated?
Whenever you change the SQL Server service account or when you change the service account password.
What are certificates used for?
They are used to encrypt data as well as digitally sign code modules.
What are the 5 tasks that are supported by maintenance plans:
1. Backing up db's and transaction logs 2. Shrinking databases 3. Re-indexing 4. Updating of statistics 5. Performing consistency checks.
What are the four different types of backups?
(1) Full (2) Differential (3) Transaction log (4) Filegroup
What are the only operations that are not allowed during a full backup?
Adding or removing a database file; Shrinking a database
What are the parameters required for a backup?
Just 2 -- the name of the database and the backup device.
What are the requirements you have to meet when invoking the MIRROR TO clause during backups?
(1) All backup devices must be of the same media type (2) Each mirror must have the same number of backup devices (3) WITH FORMAT must be specified in the backup command.
What are the two basic parts of the contents of a transaction log?
Active and Inactive.
What are the two types of key that a certificate has?
A public and a private key.
What can you do to get things back on track if an LSN gap has been introduced?
You must create a full backup before you can start backing up the transaction log again.
What do Maintenance Plans do for you in SQL Server
They provide a mechanism to graphically create job workflows that support common administrative functions such as backup, re-indexing and space management.
What does the MIRROR TO clause do during backups with SQL Server?
It provides a built in capability to create up to four copies of a backup in a single operation.
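A mirrored backup sketch that writes two identical copies in one operation (paths are hypothetical):
BACKUP DATABASE AdventureWorks
TO DISK = 'Z:\Backups\AdventureWorks_full_a.bak'
MIRROR TO DISK = 'Y:\Backups\AdventureWorks_full_b.bak'
WITH FORMAT;   -- WITH FORMAT is required when MIRROR TO is used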
What does the Service Master key do?
It is used as the root of the encryption hierarchy within an instance.
What feature was introduced in SQL Server 2005 that allows you to keep a database instance online even when SQL Server encounters a corrupted page?
SQL Server 2005 introduced the ability to quarantine corrupt pages while allowing the database to remain online.
What happens when a transaction log backup is executed?
SQL Server starts with the lowest LSN in the transaction log and starts writing each successive transaction log record into the backup. As soon as SQL Server reaches the first LSN that has not yet been committed (that is, the oldest open transaction), the transaction log backup completes. The portion of the transaction log that has been backed up is then removed, allowing the space to be reused.
What happens when differential backup is executed?
SQL Server reads the contents of the DCM -- any extents whose bits have been changed from 0 to 1 since the last full backup (which happens when any page in that extent is changed) are backed up.
What happens when SQL Server reads a page and the checksum doesn't match the checksum that is stored for that page?
SQL Server returns an error and logs the page into a table in the msdb database
What happens when the CHECKSUM option is specified during a BACKUP and a corrupt page is encountered?
The default behavior of SQL Server is that if an invalid checksum is encountered during a backup, the backup terminates with an error.
What happens when you specify more than one backup device?
SQL Server stripes the backup across each of the devices specified.
What happens when you use the VERIFYONLY to validate backups?
"(1) It calculates a checksum for the backup and compares to the checksum stored in the backup file
What is a differential backup?
A differential backup captures all extents that have changed since the last full backup.
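A differential backup sketch (names are hypothetical); it only works if a full backup already exists:
BACKUP DATABASE AdventureWorks
TO DISK = 'Z:\Backups\AdventureWorks_diff_0400.bak'
WITH DIFFERENTIAL, INIT, CHECKSUM;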
What is a DMK?
Just as the SMK is used as the root of the encryption hierarchy within an instance, a DMK is the root of the encryption hierarchy within a database.
What is a filegroup backup?
It's a type of backup that allows you to target a portion of a database to be backed up.
What is an LSN?
It's a Log Sequence Number -- it is the unique number internally assigned to each row of the transaction logs -- specifying a single entry or single change made to the database.
What is one way to speed up backups?
Compressing the backups reduces the size of the backup at a ratio of between 4:1 and 10:1.
What is required to do a transaction log backup?
The transaction logs must be intact. You cannot have gaps in the LSN sequence -- as a gap would prevent any subsequent transaction log backup from being executed.
What is the active portion of the transaction log?
It contains all the changes that have not yet been committed.
What is the command that tells SQL Server to detect and quarantine corrupted pages?
ALTER DATABASE <dbname> SET PAGE_VERIFY CHECKSUM
What is the default way that the DMK is encrypted?
By default it is encrypted with the SMK.
What is the difference between a differential backup and a transaction log backup?
A transaction log backup is an incremental backup because it captures any changes that have occurred since the last transaction log backup. A differential backup contains all pages changed since the last full backup. If you did a full backup at midnight and a differential backup every four hours -- at 4 am and 8 am -- then the backup would contain all changes made to the database since midnight -- not just since the last backup at 4 am.
What is the first action you should take after an instance is started?
To back up the service master key.
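A sketch of backing up the service master key (path and password are placeholders):
BACKUP SERVICE MASTER KEY
TO FILE = 'Z:\KeyBackups\instance_smk.key'
ENCRYPTION BY PASSWORD = '<UseAStrongPassword1!>';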
What is the first thing you should do after the creation of keys for a certificate?
Back them up.
What is the inactive portion of the transaction log?
It contains all the changes that have been committed.
What is the key created for each instance of SQL Server -- that has to be backed up to ensure a complete recovery of your instance?
Service master key.
What is the limiting backup for backup performance?
The speed of the backup device.
What is the mechanism in SQL Server to protect your database from massive corruption?
You are limited to 1,000 corrupt pages in a database. When you reach the corrupt page limit, SQL Server takes the database off-line and places it in a suspect state to protect it from further damage.
What is the primary purpose of the differential backup?
To reduce the number of transaction log backups that need to be restored.
What are the trade-offs in doing a compressed vs. an uncompressed backup?
The compressed backup gives you a smaller file to write, but it will be tremendously taxing on the processor.
What is typically the limiting factor for the speed of a backup?
The performance of the device where the backup is being written.
What must first be done before doing differential backups?
You must first have a full backup -- as a differential backup has to be applied to a full backup and can't exist until a full backup has been created.
What must exist before you can use a differential or transaction log backup?
A full backup. It is the basis for recovering a database.
What must you first do to the DMK before you can back it up?
It must be open before you can back it up.
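A sketch of opening and backing up a database master key, assuming it was created with the password shown (database name, paths, and passwords are all placeholders):
USE AdventureWorks;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<DmkCreationPassword1!>';
BACKUP MASTER KEY
TO FILE = 'Z:\KeyBackups\AdventureWorks_dmk.key'
ENCRYPTION BY PASSWORD = '<DmkBackupPassword1!>';
CLOSE MASTER KEY;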
What option during the backup would you use if you want a backup, but you don't want to affect the starting point for recovery (say the time of the last full backup)?
You'd use the COPY_ONLY option. It is often used when you want a copy of the database for a development or test environment but don't want the backup to affect the recovery chain.
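For example (hypothetical names), a copy-only backup that leaves the recovery chain untouched:
BACKUP DATABASE AdventureWorks
TO DISK = 'Z:\Backups\AdventureWorks_for_dev.bak'
WITH COPY_ONLY, INIT;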
What option of the BACKUP command gives you the ability to control whether an existing backup is overwritten or append to?
The INIT/NOINIT options. When you specify NOINIT and you are backing up to a file that already exists, SQL Server appends the new backup to the end of the file. If you specify INIT and the file already exists, SQL Server overwrites the file with the contents of the new backup.
What other roles in SQL Server can perform backups?
A member of sysadmin server role can back up any database in an instance and members of the db_owner database role can back up their databases. You can also add a user to the db_backupoperator fixed database role to allow the user to back up a database while preventing any other access to the database.
What type of value is the LSN?
It is an integer value.
Why would you ever execute a filegroup backup?
If you need to reduce the footprint of a backup -- as a full backup can consume a significant amount of space and time.
When else should you backup the service master key?
Immediately following a change to the service account or service account password.
When you do a full backup, what happens to the DCM's?
A full backup zeroes out the contents of the DCM.
Where should you always store your backups?
Offsite
Why is it important to have a backup of the DMK?
Because you need it to access certificates, asymmetric keys, and symmetric keys within a database.
Caution: Backing up individual files
Although it is possible to back up a file, it is recommended that your backups are only as granular as the filegroup level. A filegroup is a storage boundary, and when you have multiple files within a filegroup, SQL Server stores data across all the files. However, with respect to a table, index, or partition, the distribution of data across the files is essentially random. Therefore, to recover a database, you need all the files underneath a filegroup to be in exactly the same state.
T/F -- a database has to be locked during the backup process and applications have to be locked out of the database until the backup is complete, to ensure that the underlying data doesn't change during the backup itself.
False. A backup can be executed concurrently with other database operations.
T/F -- A single backup device can contain only one good backup at a time?
False. A single backup device can contain multiple backups.
T/F -- you can specify up to 8 backup devices during a backup with SQL Server?
False, you can specify up to 64 backup devices.
Chapter 9: Lesson 2: Restoring Databases
xx
Quick Check: What is the first operation that should be performed for any restore operation?
Back up the tail of the log.
Quick Check: Which recovery model always allows you to restore to the point of fail so long as you can back up the tail of the log?
The Full recovery model.
1. The server that the Customers database is running on fails and needs to be replaced. You build a new server and install SQL Server 2008. When you built the new server, you decided that instead of configuring the new server exactly like the old one, you would implement a new drive letter and folder structure for data and log files. Which option do you need to use when you restore the Customers database to the new server?
"A. Incorrect: While the NORECOVERY option allows you to restore subsequent differential and transaction log backups, the restore fails because SQL Server does not have a valid directory structure to restore the database.
Lesson Summary:
"* The first step in any restore procedure is to back up the tail of the log.
Tip: Restoring an existing database
The creation and sizing of all files associated to a database can consume a significant amount of time. If the database already exists, you should just restore over the top of the existing database.
How do page contents become inconsistent?
When the page checksum does not match the contents of the page, or a row is only partially written to the page. Page corruption usually occurs when a disk controller begins to fail.
If page corruption occurs in an index, how can you restore the index from backup without executing a full restore?
Trick question: if an index is corrupt, you don't restore it; you drop and re-create the index to fix the corruption.
If the original database file is gone, how can you issue the final transaction log backup so that you can start a restore?
Trick question. You can't issue the final transaction log backup without the original database.
If you encounter corruption in a page of data within a table or primary key and are forced to do a restore to fix the corruption issue -- what can you do short of a full restore?
You can do a Page RESTORE.
If you have one insert statement that affects 4 databases at the same time, which of the databases is the logging database for the transaction?
Trick question; log records are appended to the transaction log of every database that is affected. Transaction logs are database specific.
If you know that you're going to have to apply transaction log backups or differential backups to a database being restored, what option do you use during the RESTORE of the database?
You need to use the NORECOVERY option during RESTORE.
If you need to have a database available after a RESTORE, what option do you need to specify?
RECOVERY
If you want the RESTORE of a database to not be in RECOVERY mode, but you still want to be able to issue select statements, what option do you need to use during RESTORE?
You need to use the STANDBY option. With it, you can issue SELECT statements against the database while still issuing additional differential and/or transaction log restores.
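A sketch of a typical restore sequence using these options (file names are hypothetical):
RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\Backups\AdventureWorks_full.bak'
WITH NORECOVERY;                                 -- more backups will follow
RESTORE DATABASE AdventureWorks
FROM DISK = 'Z:\Backups\AdventureWorks_diff.bak'
WITH NORECOVERY;
RESTORE LOG AdventureWorks
FROM DISK = 'Z:\Backups\AdventureWorks_log.trn'
WITH RECOVERY;                                   -- bring the database online
Substituting WITH STANDBY = 'Z:\Backups\AdventureWorks_undo.dat' for NORECOVERY leaves the database readable between restores.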
If you want to do a Point-In-Time restore based on the LSN or transaction log MARK, what RESTORE options do you use?
The STOPATMARK and STOPBEFOREMARK
If you're restoring a database over the top of your only copy -- what option should you always run?
You should always use the CONTINUE_AFTER_ERROR option with RESTORE; otherwise, if there's a problem with the backup media, the restore errors out and stops immediately, and you've just corrupted your only good copy.
If the file system layout on the machine you're restoring to doesn't match the original, what option might you want to use during RESTORE?
You might want to use the MOVE option. The MOVE option provides the ability to change the location of one or more data files when the database is restored.
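A MOVE sketch; the logical file names and new paths are hypothetical:
RESTORE DATABASE Customers
FROM DISK = 'Z:\Backups\Customers_full.bak'
WITH MOVE 'Customers_Data' TO 'E:\SQLData\Customers.mdf',
     MOVE 'Customers_Log'  TO 'F:\SQLLogs\Customers.ldf',
     RECOVERY;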
To provide the greatest flexibility for a restore, what is the first step?
To issue a transaction log backup against the original database.
What are the 5 requirements for a page-level restore?
1. The database must be in either the Full or Bulk-Logged recovery model. 2. You must be able to create a transaction log backup. 3. A page restore can apply only to a read/write filegroup. 4. You must have a valid full, file, or filegroup backup available. 5. The page restore cannot be executed at the same time as any other restore operation.
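A sketch of the page-level restore itself (the file:page ID and paths are hypothetical):
RESTORE DATABASE AdventureWorks
PAGE = '1:88'                                    -- FileID:PageID of the damaged page
FROM DISK = 'Z:\Backups\AdventureWorks_full.bak'
WITH NORECOVERY;
-- then apply differential and log backups WITH NORECOVERY, take a new log backup,
-- and restore that final log backup WITH RECOVERY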
What are the two options for the beginning of a restore sequence?
Full backup or filegroup backup.
What DB role can perform SELECT operations against a database that is being RESTOREd in the EMERGENCY state?
Those that are members of the db_owner role.
What is the 3rd step of restoring a corrupt page?
Restore any differential backups with NORECOVERY option.
What is the 4th step of restoring a corrupt page?
Restore any additional transaction log backups with NORECOVERY option.
What is the 5th step of restoring a corrupt page?
Create a transaction log backup
What is the 6th step of restoring a corrupt page?
Restore the transaction log backup from step #5 using the WITH RECOVERY option.
What is the first step of restoring a corrupt page?
1. Retrieve the PageID of the damaged page.
What is the name given to the type of restore where you only want to restore/replay a portion of the transaction log?
This is referred to as a point-in-time restore.
What is the name given to the step in the restore process where you first take a final transaction log backup?
It's referred to as backing up the tail of the log.
What is the RESTORE option that allows you to specify a date and time for which SQL Server restores?
The STOPAT command.
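For example (hypothetical names and time), restoring a log backup only up to a specific moment:
RESTORE LOG AdventureWorks
FROM DISK = 'Z:\Backups\AdventureWorks_log.trn'
WITH STOPAT = '2008-07-23 14:30:00', RECOVERY;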
What is the RESTORE option that is used to force the restore over the top of an existing database?
REPLACE
What is the second step of restoring a corrupt page?
"2. Using the most recent full, file, or filegroup backup -- execute the following command:
What type of database RESTORE is necessary to allow for an online restore?
You can only do an online restore of a database that is a filegroup restore.
What type of DB RESTORE option allows the database to stay up and accept transactions and other operations?
The ONLINE option.
What type of DB RESTORE option allows only SELECT operations, and only by those in a specific DB role?
the EMERGENCY state.
When does Page Corruption occur?
When the contents of a page are not consistent.
Which is quicker, backing up or restoring?
Backing up. Because you back up more often than you restore, the backup/restore process is optimized for backing up. Restoring takes approximately 30% longer.
Chapter 9: Lesson 3: Database Snapshots
xx
Quick Check: Prior to reverting a database using a Database Snapshot, what must you do?
You must drop all Database Snapshots except the Database Snapshot being used as the source for the restore command.
Quick Check: Which two features are incompatible with Database Snapshots?
FILESTREAM and full-text indexes
D. Distribution"
"1. Correct Answers: B and C
Exam Tip:
You need to know that FILESTREAM is not compatible with Database Snapshots. Although you can create a DS against a database enabled for FILESTREAM, you cannot use the Database Snapshot as a source for a RESTORE DATABASE operation.
Lesson Summary:
"* A Database Snapshot is a point-in-time, read-only, copy of a database.
Database Revert
the restoration of a database from a database snapshot.
Differential Backup
Backup that contains all of the extents within a database that have changed since the last full backup.
Full backup
A backup that captures all of the pages within a database that contain data.
Log Sequence Number (LSN)
The primary key within the transaction log. The LSN starts at 0 when the database is created, always increments, never repeats, and can never be reset to a previous value.
Online Restore
A filegroup restore that is performed in SQL Server 2008 Enterprise that occurs while the rest of the database remains online and accessible.
Page Corruption
A scenario whereby the contents of a data page are inconsistent or have become corrupted, usually due to the failure of a Redundant Array of Inexpensive Disks (RAID) controller or Host Bus Adapter (HBA).
Partial Backup
A backup operation that backs up only the filegroups that can be written to.
Partial Restore
A filegroup restore that affects only a portion of a database.
Tail Backup
The first step in every restore operation. A tail backup executes a final transaction log backup to save all remaining transactions in the log prior to beginning a restore sequence.
Transaction Log Backup
An incremental backup for a database. Contains all of the transactions that have been committed since the last transaction log backup.
Transaction Log Chain
A continuous set of transaction log backups that can be applied to a full backup and does not provide any gaps in the transaction log sequence.
How does copy-on-write work?
The first time a modification to a data page within an extent occurs, SQL Server copies the before image of the extent to the Database Snapshot. When SELECT statements are issued against the DS, SQL Server retrieves data from the DS for any data that has changed while still pulling data from the source database for any extents that have not changed.
How many pages are there in an extent?
8 pages
What are the most common filegroup restrictions for Database Snapshots?
You can't reference filegroups that are offline, defunct or restoring.
What are the two requirements for creating a Database Snapshot?
"(1) You must include an entry for each data file specified in the source database.
What clause do you add to the end of the CREATE DATABASE command to create a snapshot?
AS SNAPSHOT OF
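A sketch of creating a snapshot, assuming the source database has a single data file with the logical name shown (all names and paths are hypothetical):
CREATE DATABASE AdventureWorks_Snap_0800
ON (NAME = 'AdventureWorks_Data',
    FILENAME = 'Z:\Snapshots\AdventureWorks_Data_0800.ss')
AS SNAPSHOT OF AdventureWorks;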
What do you have to do for FILESTREAM data during the revert process?
The source database cannot be enabled for FILESTREAM
What happens to the Data in a Snapshot when the source database is gradually modified?
Nothing. The Database Snapshot retains the state of the data in the source database the instant the Database Snapshot was created.
What happens to the transaction log during a revert?
Because the transaction log is rebuilt, the transaction log chain is broken.
What is a Database Revert?
It returns the source database to the state it was in when the Database Snapshot was created -- for example, if you needed to discard every change that happened within the database since the Database Snapshot was created.
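A database revert sketch (the snapshot name is hypothetical):
RESTORE DATABASE AdventureWorks
FROM DATABASE_SNAPSHOT = 'AdventureWorks_Snap_0800';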
What is a sparse file?
It is an entry in the File Allocation Table. As data is added to the file, the file automatically grows on disk. By using sparse files, the creation time for a Database Snapshot is independent of the size of the source database.
What is the Maximum Size of a Snapshot?
The size of the source database at the time the snapshot was taken -- since if every single piece of data changed, you'd have made a copy of every single original file.
What is the name of a special category of restoring data that can only be performed if you have a Database Snapshot?
Database Revert
What is the option that must be specified to keep a database online during a restore?
Trick question: Both the source and the snapshot are off-line during the revert process.
What mechanism does SQL Server use to maintain any changes that occur within the source database?
Copy-On-Write
What must you do to Full-text catalogs after a revert is performed?
They must be dropped and recreated.
What operating system feature does SQL Server take advantage of when a Database Snapshot is created?
sparse files
What restrictions does having a Database Snapshot impose on the original database?
You can't drop, restore, or detach a source database that has a Database Snapshot created against it.
What versions of SQL Server 2008 support Database Snapshots?
Only Enterprise
When backing up a Database Snapshot, what option allows you to do a full backup?
Trick Question: You can't backup a Database Snapshot
When reverting a source database with more than one snapshot, what are the restrictions?
Trick question: You can only revert source databases that have just a single Database snapshot.
Which Databases cannot have a snapshot created again them?
System Databases
Which indexes are not supported in Database Snapshots?
Full-text indexes are not supported.
Why don't you specify a transaction log for Database snapshot?
Because they're read-only, you can't write to them and therefore, do not need a transaction log.
Caution: Managing FILESTREAM Data
Database snapshot is not compatible with FILESTREAM. If you create a Database Snapshot against a database with FILESTREAM data, the FILESTREAM filegroup is disabled and not accessible.
T/F: A Database Snapshot must exist on the same machine as the original database?
True, because it has to exist on the same instance, and therefore on the same machine. It can, however, be on the same machine but in a different instance than the source database only if that instance is the same one -- in other words, it must be in the same instance, and therefore on the same machine.
Chapter 10: Lesson 1: Creating Jobs
xx
Quickcheck: If a job fails, where can you look to diagnose the problem?
The first place to look is in the job history, which can be accessed from SQL Server management Studio (SSMS) by right-clicking a job and selecting View History. You can also look in the logging files that are configured for each job step. In some cases, you might find additional information in the Microsoft Windows event logs.
Quickcheck: What types of job steps can be executed?
You can create jobs that execute T-SQL, ActiveX Scripts, OS commands, or executable files. You can also configure specific tasks for replication, Analysis Services, and Integration Services.
D. The SQL Server error log"
"1. Correct Answer: B
Exam Tip:
You need to know where to find information to diagnose the cause of an error in a job step.
Lesson Summary: Creating Jobs
"* An Operator is an alias for a person, group, or device to which you want to be the target of notifications.
Best Practice: Logging Steps
Every job step that you create should be configured to log to a file. The most common way to configure logging is to create a new log file in the first step of a job, and then each subsequent job step appends information to the log file.
Note: Understanding the Standard Workweek
The start and end time for a workday is based on the U.S. Standard workweek of Monday-Friday and does not accommodate any other workweek definition.
How can you view the histories of jobs that have been executed?
By looking at the log files that were specified in the job steps or by looking at the dbo.sysjobhistory table in the msdb database.
If you want to automate a job you've created in SQL Server, what do you do?
You add the job to a Job Schedule.
In addition to commands to execute -- what are some other things that Job Steps can be configured with?
Logging; Notification to an operator; Retry settings that specify the number of times to retry a step as well as the number of minutes between retries; Control Flow Logic;
In the context of automating SQL Server, what is an operator?
It is an alias for a person, group or device. Operators are used to send notifications when jobs fail or an alert is generated. For each operator, you specify a name along with contact information such as e-mail address, pager number or NET SEND address.
What are control flow options?
They allow you to specify an action based on either success or failure.
What are Job Steps?
They are the executable elements within a job in SQL Server.
What are some examples of control flow options?
Quit job reporting success; Quit job reporting failure; Go to next step; Go to a specific step number
What are some examples of executable Job Steps
T-SQL; Replication Tasks; OS tasks or executable files; Analysis Services Tasks; Integration Services Packages; Active X Scripts
What are three different approaches to recording logs?
Overwriting and Appending a log file. You can also log step output to a table, but this is not recommended due to the extra overhead of logging to a table versus logging to a text file.
What if you don't want the security context to be that of the owner of the job?
You can override the security context by specifying a proxy account that the SQL Server Agent uses for the job step based on credentials assigned to the proxy account.
What is the default security context for a job step?
It is the security context of the login that is set as the owner of the job.
Where do you create a job schedule?
You can create a Job Schedule through the Manage Schedule dialog box or during the creation of a job.
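Jobs can also be created in T-SQL through the SQL Server Agent stored procedures in msdb; a rough sketch (the job name, step, schedule, and backup command are all made up):
EXEC msdb.dbo.sp_add_job @job_name = N'Nightly full backup';
EXEC msdb.dbo.sp_add_jobstep @job_name = N'Nightly full backup',
     @step_name = N'Backup AdventureWorks',
     @subsystem = N'TSQL',
     @command   = N'BACKUP DATABASE AdventureWorks TO DISK = ''Z:\Backups\AW_full.bak'' WITH INIT;';
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Nightly full backup',
     @name = N'Daily at 01:00', @freq_type = 4, @freq_interval = 1,
     @active_start_time = 010000;   -- HHMMSS
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Nightly full backup';   -- target the local instance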
Chapter 10: Lesson 2: Creating Alerts
xx
Quick Check: What are the three types of alerts that can be created?
You can create alerts on performance counters, SQL Server Errors and WMI queries.
Quick Check: What are the two response actions that can be configured for an alert?
You can have an alert send a notification or execute a job in response to the alert condition.
D. Network Monitor"
"1. Correct Answer: A
Exam Tip:
You need to know the types of alerts that can be defined in SQL Server and the response criteria that can be specified for an alert.
Lesson Summary: Creating Alerts
Alerts enable you to notify operators as well as execute jobs to fix problems when an event occurs.
Alert
An object that can be created to respond to a change in the system state
Job Step
An individual unit of work within a job.
Operator
An Alias for a person, group or device that you want to send a notification to.
In SQL Server, what is an Alert?
it is the ability to send a notification or perform an action based upon events or conditions that occur either within the SQL Server instance or on the machine hosting your instance.
What are the three types of alerts?
"1. SQL Server Event
What are the two types of responses that can be configured to an alert?
"1. Execute Job
What is a Performance Condition Alert?
They are alerts that are defined against System Monitor counters. For example, you could send out an alert when the amount of free disk space falls below 15 percent.
what is a SQL Server Event alert?
It is an alert based on either an error number or an error severity level. When a SQL Server event alert is created, the SQL Server Agent scans the Windows Application event log to look for matches to the event criteria that you have defined. For example, you could fire an alert on an error severity of 22 to notify an operator that a table is suspect.
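A rough sketch of a severity-based alert wired to an operator (operator name and e-mail address are hypothetical, and e-mail notification assumes Database Mail is configured):
EXEC msdb.dbo.sp_add_operator @name = N'DBA Team',
     @email_address = N'dba-team@contoso.com';
EXEC msdb.dbo.sp_add_alert @name = N'Severity 22 - suspect table',
     @severity = 22;
EXEC msdb.dbo.sp_add_notification @alert_name = N'Severity 22 - suspect table',
     @operator_name = N'DBA Team',
     @notification_method = 1;   -- 1 = e-mail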
What is a Windows Management Instrumentation alert?
An alert for a WMI event allows you to send alerts based on events that occur on the server hosting your SQL Server instance. Anytime an event occurs on the machine (IE, a network card is disconnected, a file is created or deleted, or the registry is written to) a WMI event is raised within Windows. A WMI alert sets up a listener to the WMI infrastructure to fire the alert when the Windows event occurs.
Chapter 11: Lesson 1: TCP Endpoints
xx
Quick Check: What are the three states of an endpoint, and what is the difference between each state?
The three states are STARTED, STOPPED, and DISABLED. An endpoint that is STARTED listens for and allows connections. An endpoint that is STOPPED listens for connection requests and returns an error message. An endpoint that is DISABLED does not respond to any request.
Quick Check: What are the two parts of an endpoint?
Payload and Transport. An endpoint has a transport defined as either TCP or HTTP and has a payload defined as TSQL, SERVICE_BROKER, DATABASE_MIRRORING, or SOAP.
Quick Check: What authority must be granted before an endpoint allows a connection request?
To allow a connection request, the login that is being used must have been granted the CONNECT permission on the endpoint.
D. Users do not have CONNECT permissions on the endpoint."
"1. Correct Answer: D
D. The encryption is set to AES on each endpoint"
"Correct Answer: C
Exam Tip:
TSQL endpoints do not have any additional configuration options beyond the universal TCP settings.
Exam tip:
You can specify only one TCP endpoint with a payload of DATABASE_MIRRORING for each instance of SQL Server.
Lesson Summary: TCP Endpoints
"* Endpoints in SQL Server act very similar to firewalls by filtering out any traffic that does not meet allowed formats.
Best Practices: Authentication
When all Database Mirroring and Service Broker instances reside within a single domain or across trusted domains, you should use Windows authentication. When instances span non-trusted domains, you should use certificate-based authentication.
Best Practices: Encryption
Use RC4 for minimal encryption strength and best performance. Use AES if you require strong encryption, but note that this algorithm requires more calculation overhead and will affect performance.
Best Practices: Port numbers
Because port 5022 is the default TCP port number for a DATABASE_MIRRORING endpoint and 1433 is the default TCP port for a TSQL endpoint, you might want to specify a different port number. Not using the default port number helps to foil potential hackers -- or at least makes their job more difficult -- by requiring them to use a port scanner instead of just blindly connecting to port 1433 or 5022 for a denial of service (DoS) attack or other hacking attack.
Note: Encryption
The communication encryption for endpoints is coded to understand the source and destination of the traffic. If the communication occurs entirely within the SQL Server instance, the traffic is not encrypted because it would introduce unnecessary overhead in the communications. This is especially important with Service Broker, in which many messages are exchanged between queues within a single instance. Traffic is encrypted only when data will be transmitted outside the SQL Server instance.
Note: Endpoints on Express Edition
If you are creating a Database Mirroring endpoint on SQL Server 2008 Express, it supports only the role of WITNESS.
Besides the mandatory LISTENER_PORT -- what other options do you have for the Endpoint Access Security?
The LISTENER_IP argument is an optional argument that can provide a very powerful security layer for some types of applications.
For endpoints security, what are the two authentication methods you can use?
Windows-based authentication or certificates.
How can an endpoint specified as a WITNESS participate?
As a witness.
How can an endpoint specified as ALL participate?
In any role.
How can an endpoint specified as PARTNER participate?
As a principal or as the mirror.
If you want stronger encryption between endpoints, what algorithm can you use?
You can use AES -- the Advanced Encryption Standard algorithm.
In the context of Endpoint Access -- describe the Endpoint state of DISABLED
The endpoint does not listen and does not respond to any connection that is attempted.
In the context of Endpoint Access -- describe the Endpoint state of STARTED
The endpoint is actively listening for connections and will reply to an application.
In the context of Endpoint Access -- describe the Endpoint state of STOPPED
The endpoint is actively listening but returns a connection error to an application.
In the context of Endpoint Access -- what are the two layers of security?
endpoint state and permission.
What additional configuration do Database Mirroring and Service Broker endpoints have?
They have options to specify the authentication method and the encryption settings.
What are the different types of payloads?
SOAP, TSQL, SERVICE_BROKER, and DATABASE_MIRRORING
What are the three roles for an endpoint in Database Mirroring endpoints?
PARTNER, WITNESS or ALL
What are the two arguments you can specify that are universal for all TCP endpoints?
LISTENER_PORT and LISTENER_IP
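A Database Mirroring endpoint sketch (the endpoint name, port, and option values are illustrative only):
CREATE ENDPOINT Mirroring
    STATE = STARTED
AS TCP (LISTENER_PORT = 7024, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING
    (AUTHENTICATION = WINDOWS NEGOTIATE,
     ENCRYPTION = REQUIRED ALGORITHM AES,
     ROLE = PARTNER);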
What are the two basic parts of an Endpoint?
The transport and the payload.
What are the two different types of transports?
TCP and HTTP.
What are the two universal arguments for TCP endpoints?
You are required to specify a port for the endpoint to listen on. If you want, you can also configure an IP address that restricts the endpoint to responding only to traffic sent to that IP address.
What are the valid Transports (either TCP or HTTP) for TSQL, SERVICE_BROKER, DATABASE_MIRRORING and SOAP
TCP for TSQL, SERVICE_BROKER and DATABASE_MIRRORING; HTTP for SOAP
What does TCP stand for?
Transmission Control Protocol.
What is an Endpoint in SQL Server?
Endpoints control the capability to connect to an instance of SQL Server as well as to dictate the communications methods that are acceptable. Acting very similarly to firewalls on the network, endpoints are a layer of security at the border between applications and your SQL Server instance.
What is Endpoint Permission?
It is the second level of Endpoint Access security; It is the permission to connect to an endpoint. An application must have a login created in SQL Server that has the CONNECT permission granted on the endpoint before the connection is allowed through the endpoint.
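For example (the login and endpoint names are hypothetical), granting the connect permission:
GRANT CONNECT ON ENDPOINT::Mirroring TO [CONTOSO\SQLServiceAcct];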
What is Endpoint State?
It is the first level Endpoint Access security; An endpoint can have one of 3 states -- STARTED, STOPPED and DISABLED.
What is the benefit of restricting payloads to a valid method of transport?
It allows you to filter traffic without needing to authenticate the request. Endpoints operate like firewalls by rejecting requests that are not properly formatted based on the endpoint definition.
What is the default encryption algorithm between endpoints?
RC4 is the default algorithm.
What is the LISTENER_IP?
The LISTENER_IP argument is where you specify an IP address for the endpoint to listen on. The default (and less secure setting) is set to ALL -- which means that the endpoint listens for connections sent to any valid IP address configured on the machine. However, the LISTENER_IP argument allows you to limit connection requests to a specific NIC. Requests to other NIC's on the machine will be ignored.
What is the port that the TCP for TSQL endpoint is already configured to during installation?
TCP Port 1433 or the alternate port number for the instance.
What types of authentication are available for Service Broker and Database mirroring endpoints?
NTLM and Kerberos authentication can be specified. You can also specify an option of NEGOTIATE, which causes the specific authentication method to be negotiated between the application and the endpoint.
Which of the two arguments is required?
LISTENER_PORT
Which of the valid Payload Types allows for message forwarding?
SERVICE_BROKER endpoints implement arguments related to message forwarding.
Chapter 11: Lesson 2: Configuring the SQL Server Surface Area
xx
Quick Check: How do you configure an instance so that only local connections are allowed?
The TCP/IP provider enables connections to be created to the instance remotely. By disabling the TCP/IP provider, you can create only local connections to the instance.
Quick Check: What do you use to enable or disable features for an instance?
The sp_configure system stored procedure is used to enable or disable features.
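A sketch of disabling two high-risk features with sp_configure (both are advanced options, so show advanced options must be enabled first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 0;   -- blocks OPENROWSET/OPENDATASOURCE
EXEC sp_configure 'Ole Automation Procedures', 0;
RECONFIGURE;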
D. SQL Server Installation Center"
"Correct Answer: B
Exam Tip:
SQL Server 2005 provided a utility called the Surface Area Configuration Manager, which does not exist in SQL Server 2008. The functionality that was provided by Surface Area Configuration for Connections is now accomplished using the SQL Server Configuration Manager. The functionality provided by the Surface Area Configuration for Features did not change; the GUI interface to sp_configure was just removed.
Lesson Summary: Configuring the SQL Server Surface Area
"* The first surface area configuration decision that you make occurs during installation, when you decide whether to force all login access to the instance to use Windows-only credentials
How can you avoid exposing yourself to attack with OPENROWSET and OPENDATASOURCE calls?
If you need the ability to execute queries across instances, you should be using linked servers which allow Windows credentials to be passed between machines.
How do OPENROWSET and OPENDATASOURCE expose you to attack?
By allowing applications to embed security credentials into code that spawns a connection to another instance from within SQL Server.
In order to connect to a SQL Server instance remotely, what must you do?
You must enable the TCP/IP network provider.
What are the two features with the greatest risk of exposing an external interface or ad hoc execution capability?
OPENROWSET/OPENDATASOURCE and OLE Automation procedures.
What is the biggest potential risk to an instance?
It's the use of features that expose an external interface or ad hoc execution capability.
Chapter 11: Lesson 3: Creating Principals
xx
Quick Check: What database principal was created as a replacement for an application role?
loginless users are the replacement for an application role.
Quick Check: Which logins cannot be used to authenticate to an instance?
You cannot use logins that are mapped to a certificate or asymmetric key to authenticate to an instance
D. Add the logins for application owners to the sysadmin role."
"Correct answers: A and C
D. Add Tina to the sysadmin role."
"Correct Answer: C
Exam Tip:
You can create SQL Server logins mapped to certificates or asymmetric keys. However, a login mapped to a certificate or asymmetric key does not provide a means to authenticate to an instance. Logins mapped to certificates and asymmetric keys are used internally as a security container.
Exam Tip:
Loginless users are designed to replace application roles. Loginless users provide a much better audit trail than an application role because each user must authenticate to the instance using their own credentials instead of using a generic account.
Lesson Summary: Creating Principals
"* You can create SQL Server native logins or map Windows accounts to a SQL Server login
db_accessadmin
Add or remove users in the database
db_backupoperator
Backup the database but cannot restore a database or view any information in the database
db_datareader
Issue SELECT against all tables, views, and functions within the database
db_datawriter
Issue INSERT, UPDATE, DELETE and MERGE against all tables within a database. Members of this role must also be members of the db_datareader role
db_ddladmin
Execute data definition language statements (DDL)
db_denydatareader
Prevent SELECT against all tables, views, and functions within the database
DB_denydatawriter
Prevent INSERT, UPDATE, DELETE and MERGE against all tables within the database.
db_owner
Owner of a database that has full control over the database and all objects contained within the database.
db_securityadmin
Manage the membership of roles and associated permissions, but cannot manage membership for the db_owner role
How do you create additional instance level roles?
Trick Question: You can't create additional roles at an instance level.
How do you defend against brute force attacks on accounts other than the Administrator and sa accounts?
By setting up the security profiles to disable the accounts after a certain number of failed login attempts.
How do you defend against brute force attacks on the Administrator and sa accounts?
By renaming the accounts.
If you want to temporarily stop users from being able to access a SQL box -- such as during an upgrade -- but you don't want to revoke all their permissions (causing you to have to reestablish the permissions later), what can you do?
Disable the logins. ALTER LOGIN <loginname> DISABLE
public
Default group in every database that all users belong to
What are principals?
Principals are the means by which you authenticate and are identified within an instance or database.
what are the server roles that ship with SQL Server?
bulkadmin; dbcreator; diskadmin; processadmin; securityadmin; serveradmin; setupadmin; sysadmin
What are the two major categories of Principles?
Logins/users and groups. They exist at both the instance and database level.
What do members of the bulkadmin role do?
Administer BCP and Bulk Insert operations
What do members of the dbcreator role do?
Create databases
What do members of the diskadmin role do?
Manage disk resources
What do members of the processadmin role do?
Manage connections and start or pause an instance
What do members of the securityadmin role do?
Create, Alter, and Drop logins, but can't change passwords
What do members of the serveradmin role do?
Perform the same actions as diskadmin and processadmin, plus manage endpoints, change instance settings, and shut down the instance.
What do members of the setupadmin role do?
Manage linked servers
What do members of the sysadmin role do?
Perform any action within an instance. Members cannot be prevented from accessing any object or performing any action.
What is a loginless user?
A user in the database that is not associated to a login.
What is another way of referring to instance-level roles?
Fixed Server roles
What is the fundamental principle of SQL Server Security?
"No access by default" If you haven't been explicitly been granted permission, you cannot perform an action. You grant access to a database by adding a login to the database as a user by executing the CREATE USER command.
What is the SQL Security equivalent of Groups in Windows Security
Roles
What two accounts on a Windows / SQL box cannot be locked out?
On Windows box, the Administrators Account and on the SQL Server, the sa account.
What was introduced in 2008 to replace Application Roles in 2005 and why?
Loginless users, because application roles made auditing difficult: when an application role was used, it was the application that could be seen to be doing things, not the user who was logged in to the application.
Why are Fixed Server Roles given their name?
Because you can't modify the permissions on the role.
Chapter 11: Lesson 4: Managing Permissions
xx
Quick Check: How are principals, securables and permissions related?
You can GRANT, REVOKE, or DENY permissions ON a securable TO a principal
Quick Check: what is an ownership chain, and how can you have a broken ownership chain?
An ownership chain applies to objects that reference other objects within a database. The owner of the schema that contains the object is considered the owner of the object. SQL Server checks permissions for the first object that you access, as well as each time the owner changes within the calling stack. The chain of object owners within a calling stack is called an ownership chain. You have a broken ownership chain when the object owner changes within a calling stack and you have not been granted sufficient permissions to continue accessing objects within the call stack.
D. Grant the users SELECT permission on every schema in the database."
"Correct answer C:
D. REVOKE VIEW DEFINITION"
"Correct answer A:
Exam Tip:
To create a schema owned by another database principal, the user creating the schema must have IMPERSONATE permission on the principal being designated as the schema owner.
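A minimal sketch of the statement the tip refers to; the schema name Sales and user SalesOwner are hypothetical:
-- Requires IMPERSONATE permission on SalesOwner for the user running the statement
CREATE SCHEMA Sales AUTHORIZATION SalesOwner;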
Lesson Summary: Chapter 11: Lesson 4: Managing Permissions
"* You GRANT permissions ON a securable TO a principal
Best Practices:
It is interesting to note that many developers argue against calling stored procedures in their applications, preferring instead to embed SQL directly into the application. Yet none of those developers would ever think of writing an application as just a bunch of embedded code. Rather, developers spend large amounts of time constructing objects that have interfaces and building applications by connecting objects via their interfaces. This development style allows multiple developers to work on a complex application, even when dependent code has not been completed. Stored procedures perform the same function as the APIs that developers use within every application. A stored procedure is nothing more than an API to the database, which means that developers do not even need to know the structure of the database.
Can securables be nested?
yes -- For example, an instance contains databases; databases contain schemas; and schemas contain tables, views, procedures, functions, and so on.
How are securables relevant to managing permissions in SQL Server?
Securables are the objects on which you grant permissions.
How can stored procedures become your most powerful security mechanism within your database?
By using ownership chains.
how do you add permissions to an object?
With the GRANT statement.
How do you prevent access to an object?
With the DENY statement. DENY statements override any other permissions listed in the security table -- including the GRANT statement.
How do you remove permission entries for objects?
Using the REVOKE statement.
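A hedged illustration of the three statements above; the table dbo.Orders and user SalesUser are hypothetical:
-- Grant SELECT on a securable to a principal
GRANT SELECT ON dbo.Orders TO SalesUser;
-- Explicitly block access; DENY overrides any GRANT
DENY SELECT ON dbo.Orders TO SalesUser;
-- Remove the permission entry entirely (neither granted nor denied)
REVOKE SELECT ON dbo.Orders FROM SalesUser;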
How does this work?
Applications can be built to call stored procedures without users ever being granted direct access to the underlying tables; therefore, the only actions that can be performed are the actions allowed by the stored procedures.
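A minimal sketch, assuming a hypothetical table dbo.Orders and user SalesUser: because dbo owns both the procedure and the table, the ownership chain is unbroken and SalesUser needs only EXECUTE permission.
CREATE PROCEDURE dbo.GetOrders
AS
    SELECT OrderID, CustomerID FROM dbo.Orders;
GO
-- SalesUser can run the procedure without any permission on dbo.Orders
GRANT EXECUTE ON dbo.GetOrders TO SalesUser;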
In SQL Server, what objects are securables?
Every object within SQL Server -- including the entire instance -- is a securable.
In the management of permissions, what is impersonation?
You can impersonate another principal to execute a command in that principal's specific security context. To impersonate, you must have the IMPERSONATE permission granted to your account on the principal that you want to impersonate.
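A hedged example of impersonation within a database; the user name AuditUser is hypothetical:
-- Requires IMPERSONATE permission on AuditUser
EXECUTE AS USER = 'AuditUser';
SELECT USER_NAME();   -- statements now run in AuditUser's security context
REVERT;               -- switch back to the original security context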
What are Certificates?
Certificates are keys based on the X.509 standard that are used to authenticate the credentials of the entity supplying the certificate.
What are private certificates?
They are generated by and used to protect data within an organization.
What are public certificates?
They are essentially a file that is supplied by a certificate authority that validates the entity using the certificate.
What are signatures?
They allow you to elevate a user's permission but to provide a restriction such that the elevation occurs only when the user is executing a specific piece of code.
What are the Administrative accounts in SQL Server?
Members of the sysadmin role, members of the db_owner role, and the sa account.
What are the most useful places to employ a signature?
To bridge the gap in a broken ownership chain.
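A sketch of signing a procedure with a certificate, assuming a database master key already exists; all object names (SigningCert, dbo.GetOrders, dbo.Orders) are hypothetical:
CREATE CERTIFICATE SigningCert WITH SUBJECT = 'Code signing certificate';
-- Map a user to the certificate and grant it the permission the caller lacks
CREATE USER SigningCertUser FOR CERTIFICATE SigningCert;
GRANT SELECT ON dbo.Orders TO SigningCertUser;
-- Signing the procedure elevates permissions only while the procedure executes
ADD SIGNATURE TO dbo.GetOrders BY CERTIFICATE SigningCert;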
What are the two types of Certificates?
Public and Private
What do permissions do?
They provide the authority for principals to perform actions within an instance or database.
what is a database master key used for?
The database master key is used to protect any certificates, symmetric keys, or asymmetric keys that are stored within a database. The database master key is encrypted using Triple DES and a user-supplied password. A copy of the database master key is also encrypted using the service master key so that automatic decryption can be accomplished within the instance.
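A minimal example of creating a database master key (the password is a placeholder):
-- Encrypted by Triple DES with the supplied password; a copy is also
-- encrypted by the service master key for automatic decryption
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@ssw0rd_Placeholder!';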
What is a service master key?
Each instance of SQL Server has a service master key that is generated automatically the first time the instance is started. SMKs are symmetric keys generated from the local machine key and encrypted using the SQL Server service account by the Windows Data Protection API. By default, the service master key is used to encrypt any database master key that is created within the instance.
what is the first layer of security within a database that you should plan for and take advantage of?
Schema
What is the term given to the objects within an instance or database on which permissions can be granted?
Securables.
What other roles are members of the sysadmin role automatically added to?
They're automatically members of the db_owner role for every database within the instance.
What provides the basis for the encryption hierarchy in SQL Server?
Master Keys
What provides the containers that own all objects within a database?
Schemas
Why would SQL Server introduce the concept of a schema?
It allows you to drop a user from the database without affecting the names of objects or the applications that use those objects. Schemas are the only objects that are directly owned by users, so to drop a user that owns a schema, you must first change the ownership of the schema to another user.
Important: Master Keys
A database master key is required to create a certificate that is the basis of a signature.
Important: Object Owner
Although schemas contain all objects within a database, SQL Server considers the owner of the schema to be the owner of every object within the schema when determining ownership chains.
Important: Permission Scope
You can assign permissions to any securable. By using higher-level containers such as databases and schemas, you can assign permissions very flexibly. Although you can assign all permissions directly to the lowest-level objects, if a user needs the same permission to access all objects in a schema or database, you can replace dozens or even thousands of separate permissions by granting the permission on the schema or database instead.
Chapter 11: Lesson 5: Auditing SQL Server Instances
xx
Quick Check: Which object can be used to audit as well as prevent most object changes?
DDL triggers can audit any DDL command. If the DDL command executes within a transaction, a DDL trigger can be used to roll back the DDL statement and prevent the change from occurring.
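A hedged sketch of a database-scoped DDL trigger that audits and blocks DROP TABLE; the trigger name is hypothetical:
CREATE TRIGGER trg_PreventDropTable
ON DATABASE
FOR DROP_TABLE
AS
    PRINT 'DROP TABLE is not allowed in this database.';
    ROLLBACK;
GO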
Quick Check: Which object is required before you can create a server or database audit specification?
You must create a server audit object before a server or database audit specification can be created.
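A minimal sketch of the dependency described above; the names and file path are hypothetical:
-- The server audit object must exist first
CREATE SERVER AUDIT LoginAudit TO FILE (FILEPATH = 'C:\AuditLogs\');
-- The specification is then tied to the server audit
CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
    FOR SERVER AUDIT LoginAudit
    ADD (FAILED_LOGIN_GROUP);
ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);
ALTER SERVER AUDIT SPECIFICATION LoginAuditSpec WITH (STATE = ON);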
D. Server audit specification."
"Correct Answer: C
D. Server audit specification"
"A. Correct: You can create a database audit specification to log any SELECT, INSERT, UPDATE, DELETE, BCP or BULK INSERT statements executed against the employee pay records table for just the group of users who have access.
Exam Tip
An important feature of DDL triggers is the ability to roll back an action. The Policy-Based Management Framework creates DDL triggers for all policies that you configure to prevent an out-of-compliance situation.
Lesson Summary:
"* DDL Triggers can be created to fire when specific DDL events or events within a group are executed
Chapter 11: Lesson 6: Encrypting Data
xx
Quick Check: What do you need to do to a hash algorithm to increase the complexity when the range of possible encryption values is small?
If the range of possible values to encrypt is small, you need to salt the hash value in order to defeat brute force attacks.
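A hedged illustration of salting before hashing; the PIN and salt values are placeholders:
DECLARE @pin CHAR(4) = '1234';
DECLARE @salt CHAR(10) = 'x7!pQ92#zL';
-- The salted hash is much harder to reverse by brute force than the unsalted one
SELECT HASHBYTES('SHA1', @pin)         AS UnsaltedHash,
       HASHBYTES('SHA1', @salt + @pin) AS SaltedHash;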
Quick Check: What object is required to implement TDE?
You must create a certificate in the master database that is used to encrypt the database encryption key.
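A sketch of the TDE setup sequence, assuming a hypothetical database SalesDB and no existing master key in the master database:
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MasterKey_Placeholder!1';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE SalesDB SET ENCRYPTION ON;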
Create an asymmetric key in the database that is used to encrypt the data.
"1. Correct Answer: B
D. Create an asymmetric key in the database that is used to encrypt the data."
"1. Correct Answer: B
Exam Tip:
You must back up the certificate used for TDE and store the backup in a safe location. After you encrypt the database, you cannot access your data without the certificate.
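A minimal sketch of backing up the TDE certificate; the certificate name, file paths, and password are placeholders:
USE master;
BACKUP CERTIFICATE TDECert
    TO FILE = 'C:\Backup\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\Backup\TDECert.pvk',
                      ENCRYPTION BY PASSWORD = 'PrivateKey_Placeholder!1');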
Lesson summary: Chapter 11; Lesson 6: Encrypting Data
"* Data can be encrypted within tables using a hash algorithm, a passphrase, a symmetric key, an asymmetric key or a certificate
Asymmetric Key
A key that conforms to the X.509 standard which can be used to encrypt data within a column
Certificate
A key that conforms to the X.509 standard used to encrypt data in a column as well as a database encryption key for transparent data encryption. Certificates can be backed up and restored, making them more portable than an asymmetric key.
Database audit specification
Audit definition that applies at a database scope.
Database Master Key
Master key for the database that forms the root of the database encryption hierarchy.
DDL Trigger
Code that executes when a DDL statement, such as CREATE, DROP or ALTER is executed.
Fixed Database Role
Role within a database included with SQL Server that you cannot change the security definition of.
Fixed Server Role
Role at an instance level that is included with SQL Server that you cannot change the security definition of.
Hash Algorithm
A one-way algorithm that encrypts data, but does not allow for decryption.
Impersonation
The act of assuming the security credentials of another instance or database principal. Logins/Database users that are mapped to a certificate or asymmetric key cannot be impersonated.
Loginless user
A database user that is not mapped to a login.
Ownership chain
The list of owners within a module execution chain.
Principal
(1) An object that can own other objects within an instance; the SQL Server object that defines a security access path for a user. (2) The database within a Database Mirroring session that is recovered, is online, and allows transactions to be processed against it.
Salting
Adding a value to a piece of data prior to applying a hash algorithm as a means to obfuscate the data and deter a brute force attack.
Securable
An object within an instance or database that permissions can be granted on.
Server Audit
Audit object that defines the location and logging properties for any audit specifications that are defined.
Server Audit specification
Audit definition that applies at a server scope.
Service master key
The symmetric key for the instance that is defined by the service account and used as the root of the encryption hierarchy within an instance.
Signature
A digital signature that is applied to a code module allowing an escalation of permissions only when a module is being executed.
Symmetric key
A two-way key encryption algorithm that uses a single key to both encrypt and decrypt data
TCP endpoint
The endpoint transport that enables Transact-SQL, Database Mirroring, and Service Broker communications within SQL Server.
Of all of the different encryption methods, which provides the best balance between securing the data and performance?
Symmetric Key
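A hedged example of column-style encryption with a symmetric key; all names and values are hypothetical:
CREATE SYMMETRIC KEY SalesKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'Key_Placeholder!1';
OPEN SYMMETRIC KEY SalesKey DECRYPTION BY PASSWORD = 'Key_Placeholder!1';
-- The same key both encrypts and decrypts
SELECT EncryptByKey(Key_GUID('SalesKey'), N'4111-1111-1111-1111') AS EncryptedValue;
CLOSE SYMMETRIC KEY SalesKey;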
What is the biggest drawback to encrypting a column?
It cannot be indexed and you can't search on the column.
What is the strongest of the Hash Algorithms?
SHA1. It produces a 160-bit hash, whereas the MD2, MD4, and MD5 algorithms produce 128-bit hashes.
Important: transmitting and storing passwords
It is a common misconception that passwords are sent to SQL Server in plaintext and that SQL Server decrypts the stored password to verify whether the submitted password matches. SQL Server uses an MD5 hash to handle passwords. When a password is specified for an object, SQL Server applies an MD5 hash and stores the hash value. When you specify a password to access an object, the password is hashed using the same MD5 hash, the hashed password is transmitted over a secure channel, and the hash value is compared to the stored hash value. Even an administrator who is running a trace cannot access the password.
Chapter 12: Lesson 1: Working with System Monitor
xx
Quick Check: What are the items that you can capture data for with the System Monitor?
You can capture objects, counters, and counter instances.
Quick Check: What are the three counters that, by themselves, indicate a system problem?
System: Processor Queue Length, Network Interface: Output Queue Length, and Physical Disk: Avg. Disk Queue Length.
Quick Check: What types of data can System Monitor capture?
System Monitor captures numeric data for performance counters that are defined for hardware or software components.
D. The amount of time that a processor is in use"
"A. Correct: The System:Processor Queue Length indicates the number of processes at a machine level that are waiting for a processor to be allocated.
Exam Tip:
For the exam, you need to know what the various performance counters are for, as well as how to select the appropriate counters to capture to diagnose problems within an instance.
Lesson Summary:
"* System monitor is used to capture numeric statistics about hardware and software components.
Best Practices: Capturing Counter Logs
Viewing counters in the graphical interface provided by System Monitor is useful only for looking at the immediate state of a system. It is much more useful to capture counters to a log that can be analyzed later. When setting up counter logs, it is recommended that you select counter objects instead of individual counters to ensure you capture everything necessary for analysis.
Best Practices: Specifying an account for the counter log
At the bottom of the counter log definition screen, you can specify the security credentials that the counter log runs under. You should always configure a counter log to run under a specific account with sufficient permissions to access the applications for which you're gathering counters. Failure to define a specific account is the most common cause of a counter log failing to start. The next most common causes are password expiration, a locked-out account, or a deactivated account.
What are the 3 types of counters in system monitor?
Object, counter and instance.
Chapter 12: Lesson2: Working with the SQL Server Profiler
xx
Quick Check: What are the three items that you define within a trace?
You define events, data columns, and filters within a trace.
Quick Check: Which events are commonly used to establish a performance baseline?
The RPC:Completed and SQL:BatchCompleted events are used to establish a performance baseline.
D. Profiler"
"1. Correct Answers: A and D
Exam Tip:
You need to know which events are used to solve various problems. For example, resolving deadlocks, blocking or stored procedure performance.
Lesson Summary: Working with the SQL Server Profiler
"* Profiler is the utility that allows you to interact graphically with the SQL Trace API
Best Practices: Logging a Trace to a File
Although SQL Server is optimized to handle large volumes of data changes, the SQL Trace application programming interface (API) can produce enough events to overwhelm even a server running SQL Server. You should never log trace events to the same instance for which you are capturing events. Because you can possibly overwhelm a server running SQL Server when you are logging events, the best logging solution is to log trace events to a file and then later import the file into the server for analysis.
Best Practices: Maximum Trace File Size
Setting the maximum file size to 50 MB provides a good trade-off for managing the size and number of trace files. A 50-MB file is small enough to copy or move quickly across any network while also containing a large enough set of events within a single file for analysis.
Best Practices: Running Trace Automatically
Although you can use Profiler to gather quick traces against an instance, it is much more common to set up and manage traces using code. By running traces via code, you eliminate all the overhead of a graphical tool while also providing a means of unattended tracing through the use of jobs within SQL Server Agent.
What are the five most commonly used event groups in SQL Server Profiler?
Locks, Performance, Security Audit, Stored Procedures, and TSQL.
Caution: Correlating a counter log to a trace file
You can correlate a counter log with a trace file only if you have captured the StartTime data column in the trace.
Caution: Returning results into a Grid
Many organizations install the client tools on the server when SQL Server is installed. Although the installation of tools provides utilities for querying and troubleshooting an instance from the server console, you have to account for the overhead of the tools. Profiler can capture a very large number of events in a short amount of time, which, when loaded into the grid within Profiler, can require a large amount of memory. Although grids present information in an easy-to-understand format, a grid has much more overhead than a text-based format.
Chapter 12: Lesson 3: Diagnosing Database Failures
xx
Quick Check: What are the main places to find error and informational messages about the database engine?
you can use the SQL Server error log and Windows Application Event log for messages about the database engine. If you are auditing logins and logouts from the instance, you can use SQL Server error log and Windows Security log. The Windows System Event log can also provide hardware and operating system information useful to troubleshooting a database engine issue.
Quick Check: What are the three error codes that are raised when you run out of space for a database?
You receive a 9002 error when you run out of transaction log space and either an 1101 or 1105 error when the data files are out of space.
D. Killed process"
"Correct Answers: A and D
Exam Tip
For the exam, you should know common error codes for space issues, what each error means, and how to fix the problems which generated the errors.
Lesson Summary: Diagnosing Database Failures
"* SQL Server error log contains configuration information about instance startup, errors, stack dumps, and informational messages about your instance.
Best Practice: Out of Space Errors
Running out of space is a serious issue that should be avoided at all costs. Most database administrators (DBAs) react to problems. Instead, you should proactively manage your instances to ensure that you do not run out of space. You can proactively manage space by creating alerts to notify an operator when the amount of free space falls below a specified threshold, usually 10 percent to 15 percent. You can also be notified immediately when a space error occurs by creating alerts in SQL Server Agent.
Note: Increasing Transaction Log Space
If a transaction log backup does not free up enough space in the log to be reused, then you need to add space to the transaction log by increasing the disk space available. The most common way to increase the disk space available is to add a second log file to the database on a disk volume that has free space.
Tip: File Growth
Before adding files to filegroups, you should first check whether the data files have the auto-grow feature disabled. If auto-grow is disabled and the disk volume still has space, you can increase the space available just by increasing the size of the existing files.
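A minimal sketch of the two options described above; the database name, file names, and paths are hypothetical:
-- Grow an existing data file when the volume still has free space
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_data, SIZE = 20GB);
-- Or add a second log file on a different volume with free space
ALTER DATABASE SalesDB
ADD LOG FILE (NAME = SalesDB_log2,
              FILENAME = 'E:\SQLLogs\SalesDB_log2.ldf',
              SIZE = 2GB);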
Inside of Windows Event Logs, what are the three different event logs?
1. System Event log 2. Application Event logs 3. Security Event log
What are the four type of logs where you can find error and informational messages related to your server running SQL Server?
1. Windows Event logs 2. SQL Server error logs 3. SQL Server Agent logs 4. Database mail logs.
What are the most common errors that you encounter?
Running out of space for either the data or log files.
What's the first thing you should do whenever a transaction log fills up?
execute a transaction log backup.
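A minimal example of that log backup; the database name and path are hypothetical:
BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_log.trn';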
Which system db is a special case that needs to be closely watched?
tempdb. Running out of space in tempdb causes serious problems for a variety of processes. If tempdb runs out of space, it can affect every database on the instance.
Chapter 12: Lesson 4: Diagnosing Service Failures
xx
Quick Check: Errors in which three databases prevent SQL Server from starting?
Errors in the master, tempdb, and mssqlsystemresource databases can prevent an instance from starting. Errors in all other databases just make the problem database inaccessible until you can fix the problem.
Quick Check: What does a device activation error mean?
A device activation error means that SQL Server either cannot find or cannot access a data or log file.
D. The SQL Server service account was locked out."
"1. Correct Answer: B
Exam tip:
You should focus on the most common error scenarios, most of which deal with security permissions. You need to know which utilities to use to troubleshoot the errors. You should also know how to rebuild a master database in SQL Server 2008, which is accessed using the new Installation Center even though it still uses setup. Favorite questions for the exam test whether you know the "new" or "improved" way of performing an action in the current SQL Server version vs. the method(s) from the previous version.
Lesson Summary: Diagnosing Service Failures
"* SQL Server Configuration Manager is used to configure and manage services, protocols, and the SQL Native Client
Best Practices: Managing SQL Server Services
It is always much easier to remember and enforce administration policies that do not have a lot of exceptions. You can safely change some of the options for SQL Server services using the Windows Services console; however, others cannot be safely changed this way. Therefore, most environments that I have worked in dictate that all changes to a SQL Server service must be made using SQL Server Configuration Manager.
Note: Device Activation Errors
Anytime you see a device activation error, SQL Server could not access a data or log file for a database. Device activation errors for master and tempdb prevent the instance from starting, whereas device activation errors for any other database only make the database unavailable. You should always investigate any device activation error because you either have a failing disk subsystem or an administrator is improperly shutting down a storage system while it is being used.
Note: Experienced SQL Server DBAs
Besides the additional years on the job, the biggest thing that separates someone with experience from someone without experience is that the experienced person has managed to survive things that have gone wrong. I'll never admit to many of the things I've managed to survive over the decades. By following the best practices, sidebars, notes, and cautions you find in this book, I hope that you can avoid many of the mistakes I and many others have made over the years.
Note: Startup Mode
A service with a startup mode of Disabled shows as Other in the Start Mode column of the SQL Server Configuration Manager main window.
If a SQL Server service is in a stopped state, where can you look to see what happened?
SQL Server error log and Windows application event log
if SQL Server starts and then immediately shuts down -- what are you most likely dealing with?
An issue with the SQL Service account -- Account deleted, locked out, disabled or expired.
If you cannot connect to your SQL Server instance, what is the first thing you do?
Check to see whether the SQL Server service is running. If the service is in a starting state, SQL Server is in the process of starting up and should be available in a short amount of time. If it is stopped, that is a problem you need to investigate.
If you still can't get the service running after verifying the SQL Server service account and the disk permissions, what should you look at?
Check to see if the startup parameters were changed improperly.
If you've checked all the SQL Server service account issues and they're fine, but SQL Server is shutting down right after starting, what is the next thing to check?
It's most likely a permissions issue. When SQL Server starts up, the master database is brought online first and then tempdb is re-created. Re-creating tempdb causes a write to occur to the folder that stores the data and log files for the tempdb database. If SQL Server cannot re-create tempdb, the service shuts down.
Lastly, if you still can't get the instance up after checking the service account, the storage permissions, and the startup parameters, what do you do?
You should start the repair wizard; you most likely have a corrupt master database.
Caution: SQL Server Clustered instances
The start mode for all SQL Server services for a clustered installation should be set to Manual. Do not change the start mode for any services in a clustered installation. The services are being controlled by the cluster service, and setting any of the services to an Automatic start mode creates significant problems on a cluster if something restarts.
Caution: System Databases
SQL Server has four system databases on every instance: master, model, msdb, and tempdb. If you have configured replication, you also see a database named distribution. SQL Server has a sixth system database, first introduced in SQL Server 2005, named mssqlsystemresource. The mssqlsystemresource database contains most of the stored procedure, function, DMV, and other code that ships with SQL Server. The mssqlsystemresource database is critical to SQL Server operations; errors in it prevent a server running SQL Server from starting. Unfortunately, this database is hidden, so you need to look for device activation errors related to the mssqlsystemresource database as well as the master and tempdb databases.
Chapter 12: Lesson 5: Diagnosing Hardware Failures
xx
Quick Check: What errors do you see if there is a fault in either the memory or processor?
If you are encountering memory or processor problems, you see STOP errors. If a memory error is encountered when the computer is booting, you see a POST error. Both STOP and POST errors are accompanied by a blue screen with additional diagnostic information.
Quick Check: What errors do you see if your disk storage goes off-line underneath a database?
When a disk volume that databases are stored on goes off-line, SQL Server begins logging device activation errors.
D. You have a bad memory module in the server"
"Correct Answer: C
Exam Tip:
For the exam, you need to know the most common errors related to the failure of hardware components.
Lesson Summary: Diagnosing Hardware Failures
"* A severe failure of the disk system that takes a storage volume off-line logs device activation errors and the affected databases become inaccessible.
If the errors get severe enough for a volume or an entire array to go off-line, where would you begin to see device activation errors?
SQL Server Error log as well as the Windows Application Event Log.
If the storage for your database is locally attached -- what can you use to determine the state of disk volumes?
Disk Management folder within the Computer Management Console.
What is the first indication that you have of a failure in the disk system?
Errors logged in the Windows System Event Log or within the logging system for your Storage Area Network (SAN) or Network Attached Storage (NAS) array.
What is the first indication that you normally have for memory or processor issues?
When a stack dump is generated.
What is your first line of defense against disk failure in your SQL Server setup?
Your disks should always be in a RAID array. Ideally, you'd also have a spare disk next to the RAID array so that if one disk in the array were to fail, you could take it out, put in the spare disk, and spin the whole system back up.
Where will you see an error for a stack dump denoted?
In the SQL Server error log.
CAUTION: S.A.N. and N.A.S. Arrays
When your databases are stored on SAN or NAS arrays, you should always use the specialized utilities that ship with your storage array to diagnose and repair any disk errors.
Chapter 12: Lesson 6: Resolving Blocking and Deadlocking Issues
xx
Quick Check: 1. What are the most common lock modes and types that are available?
1. The three most common locking modes are shared, exclusive, and update. The three lock types are row, page, and table.
Quick Check: 2. How does a deadlock occur?
2. A deadlock requires at least two processes that are both modifying data. Each process acquires an exclusive lock on a resource and then attempts to acquire a shared lock on the resource that is exclusively locked by the other process.
Quick Check: What are the valid input sources for DTA to analyze?
DTA can analyze queries and stored procedures that are stored in either a file or a table. The most common tuning source for DTA is a trace output file.
D. Sp_who2 system stored procedure"
"1. Correct Answers: B and D
Exam Tip
For the exam, you need to know the locks that can be acquired and how lock escalation can lead to either blocking or deadlocking issues. If a block or deadlock occurs, you also need to know how to resolve the problem.
Exam Tip:
You need to know how each of the tuning options affect the recommendations that DTA makes.
Lesson Summary: Chapter 12: Lesson 6: Resolving Blocking and Deadlocking Issues
"* The Lock Manager is resonsible for managing the locks that SQL Server uses to maintain data consistency while allowing multiple users to manipulate data concurrently.
Best Practices: Handling Deadlocks
Deadlocks are transient situations basically caused by bad timing. If the queries that the two sessions were running had completed in less time, the deadlock might have been avoided. Likewise, if one process had been started a small amount of time later, the deadlock might never have occurred. Because a deadlock is a transient locking conflict, your applications should be coded to detect a 1205 error and then immediately reissue the transaction, because it is very likely that the process will not deadlock the second time the command is executed.
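A hedged sketch of detecting a 1205 error and reissuing the transaction; the table and values are hypothetical:
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1205        -- chosen as the deadlock victim
    BEGIN
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        -- Reissue the transaction; it is very likely to succeed this time
        BEGIN TRANSACTION;
        UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42;
        COMMIT TRANSACTION;
    END
END CATCH;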
Note: Lock escalation
It is a very common misconception that SQL Server promotes row locks to page locks. Row locks are promoted only to table locks.
Counter log
A set of numeric values corresponding to various properties of hardware and software that can be captured using System Monitor.
Deadlock
A transient situation that occurs when two processes that are attempting to modify data acquire competing locks in such a way that neither process can complete.
Isolation level
Locking behavior defined by the American National Standards Institute (ANSI) that SQL Server uses to determine how the Lock Manager should behave.
Lock escalation
The process by which the Lock Manager promotes a lock. Locks can be promoted from row to table or from page to table.
SQL Trace
The event API that SQL Server exposes to allow you to capture information about the operational state of an instance.
Trace Event
An action that is executed within SQL Server that is exposed via the SQL Trace API. Examples of events are a file growing, a query being executed, or a connection to the instance.
Can you have multiple exclusive locks on the same data?
No. As the name implies, you can have only one exclusive lock on a resource at a time, and all other processes needing to access the resource must wait until the exclusive lock has been released.
Can you have multiple read locks on the same data?
Yes.
Describe the Transaction Isolation Level of READ COMMITTED
This is the default isolation level for SQL Server. An exclusive lock blocks both shared as well as exclusive locks. A shared lock blocks an exclusive lock. Shared locks are released as soon as the data has been read.
Describe the Transaction Isolation Level of SERIALIZABLE
Has all the restrictions of the REPEATABLE READ isolation level. In addition, a new row cannot be inserted within the keyset range currently locked by the transaction. Locks are held for the duration of the transaction.
Describe the Transaction Isolation Level of READ UNCOMMITTED
Data can be read that has not been committed. Although an exclusive lock still blocks another exclusive lock, read operations ignore an exclusive lock.
Describe the Transaction Isolation Level of REPEATABLE READ
Exclusive locks block both shared and exclusive locks. Shared locks block exclusive locks. Shared locks are held for the duration of a transaction.
Describe the Transaction Isolation Level of SNAPSHOT
Uses the row versioning feature to keep shared and exclusive locks from blocking each other while maintaining data consistency. A read operation retrieves data from the version of the row prior to the start of a data modification operation.
How does Lock Manager determine which type of lock to acquire?
It uses distribution statistics.
How does SQL Server Lock Manager determine if a lock is to be acquired on a page?
According to the two percent rule -- which is designed to minimize the number of locks needing to be acquired and managed -- if SQL Server determines that more than two percent of the rows on a page will need to be accessed, a page lock will be acquired.
How does SQL Server Lock Manager determine if a lock is to be acquired on a table?
If more than 2 percent of the pages of a table will need to be accessed, a table lock will be acquired.
In most locking situations in SQL Server -- what are the locking modes you deal with?
"1. Shared
What are Isolation Levels?
Isolation levels affect the way SQL Server handles transactions, as well as the duration of the locks acquired.
What are the five isolation levels that SQL Server has.
"1. READ UNCOMMITTED
What are the scopes that locks have?
"1. Session
What are the three different lock types that can be acquired?
"1. Row
What if the distribution statistics by Lock Manager are inaccurate, or if there are no distribution statistics?
Lock Manager can promote locks to table-level locks, i.e., a row lock to a table lock or a page lock to a table lock.
What is a shared lock?
It is a lock acquired for read operations to prevent the data being read from changing during the read.
What is an exclusive lock?
It is a lock acquired on a resource that is being modified and is held until the modification is complete.
What is an update lock?
An update lock is a hybrid of a shared and an exclusive lock. Update locks are acquired during any action that requires SQL Server to first locate the piece of data to be modified. An update lock starts out as a shared lock on the resource; when SQL Server finds the piece of data that needs to be modified, the lock is changed to an exclusive lock while the data is being changed.
What is Lock Manager?
An internal process that determines the appropriate lock to acquire and how long to retain it, and that arbitrates when processes are allowed to modify data so that reads are always consistent.
What mechanism does SQL Server use to maintain data consistency for multiuser access?
locking
What type of algorithm is the Lock Manager use to determine the order in which to execute locks?
A FIFO algorithm -- first in, first out. Each process that executes a command needs to acquire a lock. The locks being requested are queued up by the Lock Manager in the order that the requests are made. As long as the requested resource does not have a lock, or has a lock that does not conflict with the lock being requested, the Lock Manager grants the lock request.
Why?
Because read operations cannot introduce data inconsistencies, you can have multiple shared locks on the same resource at the same time.
Quick Check: 1. What are the objects that are used for a Resource Governor implementation?
1. Resource Governor relies on a user-defined classifier function in the master database to assign a connection to a workload group. Each workload group is assigned to a resource pool that manages CPU and memory resources.
Quick Check: 2. What resources can Resource Governor control?
2. Resource Governor can be used to manage CPU and memory resources.
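A hedged sketch of the objects involved; the pool, group, function, and login names and the CPU limit are hypothetical:
USE master;
GO
CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO
-- The classifier function must live in master and be schema-bound
CREATE FUNCTION dbo.fn_classifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = 'ReportLogin'
        RETURN 'ReportGroup';
    RETURN 'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;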
D. A trace file containing SP:StmtCompleted events"
"Correct Answers: A, C, and D
Exam Tip:
You need to know the resources that Resource Governor can control as well as how to test and troubleshoot a classification function.
Lesson Summary: Chapter 13: Lesson 1: using the Database Engine Tuning Advisor
"* DTA is used to analyze a query workload against a database to make recommendations on structures to create or drop, which might improve performance.
Note: Connection Classification
Classification occurs at the time a connection is created. Therefore, the only way you can limit resources is based on properties of the connection. You cannot limit the resource consumption of individual queries or even types of queries.
Note: Default Resource Pools
If a classifier function is not associated with Resource Governor, or the classifier function does not exist, returns NULL, or returns a nonexistent workload group, the user session is associated with the default resource pool.
Note: Resource Allocation
All connections running within a resource pool are treated with equal weight, and SQL Server balances the resources available to the resource pool across all requests currently executing within the pool.
Caution: Limiting Resources
Even with Resource Governor enabled, SQL Server still seeks to maximize the resources available to all concurrently executing requests. If you set a maximum limit for a resource pool, connections assigned to the resource pool can use more resources than the configured maximum. If other sessions that are executing do not need all the resources, any amount of free resource is allowed to be used by any session, even if that causes the session to exceed the resource limits of its assigned resource pool.
Lesson Summary: Chapter 13: Lesson 2: Working with Resource Governor:
"* Resource Governor is used to limit the CPU, memory, or both allocated to one or more connections.
Quick Check: 1. What is the difference between sys.dm_db_index_operational_stats and sys.dm_db_index_physical_stats?
1. sys.dm_db_index_physical_stats returns fragmentation statistics for each index, and sys.dm_db_index_operational_stats returns locking, latching, and access statistics for each index.
Quick Check: 2. Which DMV can you use to retrieve execution statistics for each connection currently executing a command?
2. The sys.dm_exec_requests DMV returns one row for each currently executing command.
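A minimal example of querying currently executing requests, joined to the statement text:
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.blocking_session_id,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;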
D. Sys.dm_db_missing_index_details"
"1. Correct Answer: C
D. Limit the memory utilization for marketing users."
"1. Correct Answer: B
Exam Tip:
For the exam, you need to know the purpose of the main set of DMVs and how to use each of them to diagnose and troubleshoot performance issues.
Lesson Summary: Chapter 13: Lesson 3: Using Dynamic Management Views and Functions.
"* The sys.dm_db* DMVs provide general space and index utilization information.
Terminology Conventions:
For simplicity, the entire set of instrumentation code that is available within SQL Server is referred to collectively as DMVs, regardless of whether you are using a view or a function.
More Info: Wait Types
Although wait statistics are an extremely valuable piece of information for diagnosing performance issues, in the almost 10 years since detailed information has been available, Microsoft still has not documented the wait types, all of which have extremely cryptic names. The best resource available for understanding wait types and how to resolve issues uncovered by wait types.
Chapter 13: Lesson 4: Working with Performance Data Warehouse
xx
Quick Check: 1. Which features is the Performance Data Warehouse based on?
1. The Performance Data Warehouse is built upon the Data Collector infrastructure. Data collection is based on SSIS packages and SQL Server Agent jobs.
Quick Check: 2. What collector types are available in SQL Server 2008?
2. SQL Server 2008 ships with T-SQL Query, SQL Trace, Query Activity, and Performance Counter collector types.
D. Implement Microsoft System Center Operations Manager 2007."
?
Exam Tip:
For the exam, you need to know what the purpose of the Performance Data Warehouse is, the components that data collection is based upon, and the information that can be collected.
Lesson Summary: Chapter 13: Lesson 4: Working with Performance Data Warehouse
"* The Data Collector is a new infrastructure component available in SQL Server 2008 that is based on SSIS packages and SQL Server Agent jobs.
Classification function
A function in the master database that assigns a workload group to each connection.
Collection item
A query or performance counter that you want to track in the Performance Data Warehouse
Collection set
A group of collection items that are combined together and managed by a Data Collector
Collection target
The instance, database, or object that a collection set is executed against.
Data collector
The Data Collector is a new infrastructure component available in SQL Server 2008 that is based on SSIS packages and SQL Server Agent jobs.
Data provider
The connection libraries used by the Data Collector to connect to collection targets.
Dynamic Management Function (DMF)
A function that ships with SQL Server and provides configuration, object, or diagnostic information.
Dynamic Management View (DMV)
A view that ships with SQL Server and provides configuration, object, or diagnostic information.
Resource Pool
The object within a Resource Governor configuration that defines the CPU and memory resources that can be allocated by a workload group.
Workload File
A file or table which contains SQL statements for DTA to analyze.
Workload group
An object used by Resource Governor to associate a connection to a resource pool.
Chapter 14: Lesson 1: Designing Windows Clustering
xx
Quick Check: 1. What is the main difference between a standard cluster and a majority node set cluster?
1. A standard cluster uses a shared quorum database. A majority node set cluster maintains a separate quorum database on each node that is synchronized across all nodes. The majority of nodes (more than 50 percent) must be online for a majority node set cluster to function.
Quick Check: 2. What are some examples of cluster resources?
2. Cluster resources can be hardware, software, or networking. Some examples are IP addresses, network names, disk mounts, and Windows services.
Quick Check: 3. How many network connections does a node need for clustering? Why?
3. Each node needs at least two network connections: One connection is used for public communications to applications on the network, and the other is used for private internal communications within the cluster.
Quick Check: 4. How does the health check within a Windows cluster work?
4. The basic health check that is performed is called a LooksAlive test. This test consists of each node pinging the others.
Quick Check: 5. Which types of disk configuration are supported for clustering?
5. Clustering supports basic disks. Dynamic disks are not supported. Disks must also be external to each node within the cluster, so disks mounted locally within a cluster are not visible to any resource within a cluster.
D. Windows Server 2003 Enterprise Edition"
"1. Correct Answer: C
D. Windows Server 2003 Enterprise edition with a majority node set cluster"
"2. Correct Answer: A
D. Terminal Services service"
"3. Correct Answer: B
Lesson Summary: Chapter 14: Lesson 1: Designing Windows Clustering
"* You build a standard cluster using a single quorum database stored on a shared disk array. You build a majority node set cluster with a copy of the quorum database on all nodes within the cluster.
Best Practices: Microsoft Distributed Transaction Coordinator (MS DTC)
MS DTC, which you need to add to every Windows cluster you build, ensures that operations requiring enlisted resources such as COM+ can work in a cluster. It has been recommended that you always configure MS DTC to use a disk that is different from the quorum disk or any disk used by SQL Server or other applications. We find this to generally be a waste of very limited disk resources. If you are running applications in a cluster that make very heavy use of MS DTC, you need to dedicate a disk to MS DTC operations. If you are not running applications that require COM+, you can safely configure MS DTC within the cluster group and set its dependencies to the quorum drive.
Best Practices: Private Network Connection
"You should configure the following items on the private network connection:
Note: Cluster Analysis Warnings
Because of the way Virtual Server handles disk resources internally, you can receive some warnings when a cluster is configured. This is normal and does not affect the operation of your cluster. As long as you do not receive an error (the progress bar turns red), your configuration succeeded, and you have a fully functional cluster.
Note: Disk Initialization
When you select Disk Management on Node2, the Initialize and Convert Disk Wizard does not appear because the disks already have a signature written to them. You do not need to format the disks because you already performed this step when you configured Node1. You also do not need to specify drive letters because Node2 picks them up from the cluster after you configure it.
Note: Local Quorum
The location of the quorum is %SystemRoot%\Cluster\QoN.%ResourceGUID%$\%ResourceGUID%$\MSCS. A share is created on each node that is named \\%NodeName%\%ResourceGUID%$. You should not modify this directory or change the permissions on this directory or share in any way.
Note: Operating System
Check with your storage area network (SAN) vendor to determine whether your nodes can be booted from the SAN. If your nodes cannot be booted from the SAN, or if you are using direct attached storage, you must install the operating system on an internal hard drive that you use to boot the node. Installing the operating system on an internal hard drive on each node is the most common configuration.
Note: Remote Procedure Call (RPC)
All health checks within a cluster use the remote procedure call (RPC) service. If the RPC service is unavailable or has been disabled, all health checks within a cluster fail. You must ensure that the RPC service is enabled and set to start automatically on all nodes within a cluster.
Note: SCSI adapter ID for node2
Each node must use a different SCSI adapter ID. Because Node1 is configured with a SCSI adapter ID of 6 for each SCSI adapter, you must configure Node2 with a SCSI adapter ID of 7 for each node.
Note: Specifying a Quorum
When you configure a cluster on physical hardware, the disk that the New Server Cluster Wizard selects by default as the quorum is the first disk added to Node1 that is not a locally attached disk. Virtual Server selects the first drive letter in alphabetical order. You can use the Cluster Configuration Quorum dialog box to specify a local quorum that is used when building a single-node cluster for testing. This dialog box is also where you can change the type of cluster from the standard cluster you are building to a majority node set cluster by choosing Majority Node Set from the drop-down list. If you choose Majority Node Set from this drop-down list, the New Server Cluster Wizard creates a quorum database on each node in the cluster.
Note: Virtual Server
You can use Virtual Server and Microsoft Virtual PC to simulate hardware configurations. Unlike Virtual PC, Virtual Server supports Windows clustering and you can use it to build a SQL Server failover cluster.
Caution: Antivirus Software
Antivirus software has become very prevalent on database servers. In a cluster environment, you need to configure antivirus scanning so that it does not interfere with cluster operations. You must exclude the MSCS directory and all the directories containing data files from scanning. During a failover, the disks are mounted on the node that a group is failing over to, which triggers the antivirus software to begin scanning the disk. If the antivirus software begins scanning a database file before SQL Server can open it, the recovery of the database is delayed until the file has been fully scanned. Because database files are normally very large, scanning can add a considerable amount of time to the failover process.
Caution: Basic Disks
Follow the prompts in the dialog box to set up the disks. Make absolutely certain that you do not convert the disks to dynamic. Clustering supports only basic disks; if you convert the disks to dynamic disks, you cannot configure your cluster and will have to start at the beginning with new disks.
Caution: Encrypted Operating System
To support encryption of the file system in a cluster configuration, Kerberos must be enabled, and the computer accounts, along with the cluster service account, must be trusted. If you choose to encrypt the file system, you must also account for the performance degradation that all read and write operations incur because of the encrypt/decrypt processes.
Caution: Initial Configuration
It is critical that you be very careful with the order in which you start and stop Node1 and Node2 during the subsequent steps in this practice. If you ever run both Node1 and Node2 at the same time before you configure the cluster, you will corrupt the disks and not be able to complete the steps. You must check and double-check the state of Node1 and Node2 before stopping or starting either one.
Important:
A complete discussion of Virtual Server is beyond the scope of this book. You can find step-by-step instructions for performing each of the actions required to configure the base environment in the Virtual Server documentation. If you have physical hardware capable of clustering, you can perform the practices on this hardware by skipping the steps specific to configuring the Virtual Server environment.
Important: Compatible hardware
The most frequent cause of outages for a cluster is hardware that has not been certified for clustering. To ensure that the hardware you are deploying is certified for clustering, it must appear in the Windows Catalog. The entire hardware solution must specifically designate that it is certified for clustering, so you need to check the clustering categories of the Windows Catalog (which can be found at www.microsoft.com/whdc/hcl/default.mspx).
Important: Lesson Practices
You use Virtual Server for all the practices in this chapter. To follow the steps in the practices, you have to create three virtual machines using Virtual Server, and you must install all three machines with Windows Server 2003 Standard edition SP2 or later or Windows Server 2008 Standard or later. You should configure one of the virtual machines as a domain controller and the other two machines as member servers in the domain. You need to allocate 512 MB of memory to the two virtual machines that you configure as member servers and configure the domain controller with 192 MB of RAM. To meet the hardware requirements for this Virtual Server configuration, you need a minimum of 1.5 GB of RAM on the host machine, and the disk drives should be at least 7,200 RPM for reasonable performance.
More Info: Windows Server Versions
At the time of writing, Windows Server 2008 was just being released onto the market. The exercises in this chapter, as well as detailed Windows clustering information, are based primarily on Windows Server 2003, with information from Windows Server 2008 incorporated where available. If you are deploying Windows Server 2008, please refer to the Windows Server 2008 documentation for details on clustering features.
Chapter 14: Lesson 2: Designing SQL Server 2008 Failover Cluster Instances
xx
Quick Check: 1. Which types of Windows accounts and groups can you use with a SQL Server cluster instance?
"1. Domain users and domain groups must be used with SQL Server failover cluster instances. The SID for accounts and groups used must be resolvable across all nodes in the cluster. The SID for a local account or group cannot be resolved across machines.
Quick Check: 2. With how many clustered instances can a single drive letter be used?
2. Although a clustered instance can address multiple drive letters, you can configure a given drive letter for only a single instance. This configuration prevents the possibility of having SQL Server running on one node while a different node has ownership of the disk resources required by the clustered instance.
Quick Check: 3. What are the two health checks performed in a cluster, and which operations are executed?
3. The LooksAlive check executes every 5 seconds by default and issues a ping from all nodes to the IP address of the SQL Server clustered instance. The IsAlive check executes every 60 seconds by default, connects to the SQL Server clustered instance, issues SELECT @@SERVERNAME, and must receive a valid result set.
D. Database snapshots"
"1. Correct Answer: C
D. Replication"
"1. Correct Answers: B and D
D. Windows 2003 Enterprise edition"
"2. Correct Answer: D
D. Replication"
"2. Correct Answer: D
D. Windows Server 2003 Datacenter edition with SQL Server 2008 Datacenter"
"3. Correct Answer: A
Lesson Summary:
"* You can configure SQL Server as either single or multiple-instance clusters.
Best Practices: Balancing Security with Manageability
Security best practices would create a domain-level group for each type of service and for each SQL Server clustered instance installed. Management simplicity would create a domain-level group for each of the three services, and all SQL Server failover cluster instances would specify the same set of domain groups. You need to determine where to balance a very secure (but highly complex) domain group scheme against a less complex (but less secure) domain group scheme.
Best Practices: SQL Browser Service
SQL Server 2008 installs a service called SQL Browser. If you have installed named instances in a cluster, the SQL Browser service must be running to resolve these names. If you do not have named instances, you should disable the SQL Browser service.
Note: Prerequisites
You must have already installed .NET Framework 2.0 SP1 on both nodes in the cluster before proceeding with this practice. If you are configuring a Windows Server 2003 cluster, you also need to download and install the KB937444 hotfix.
Cluster group
A container that combines multiple cluster resources into a logical management unit.
Cluster name
The DNS resolvable name of the cluster
Cluster node
A piece of hardware that is capable of running a Windows cluster. It can be a physical server that is listed in the clustering section of the Windows Catalog, or a machine partition within an Enterprise-level piece of hardware (for example, Unisys ES7000)
Cluster resource
A hardware, software, or network resource that can be configured and managed within a cluster.
Majority node set cluster
A cluster that stores a local copy of the quorum database on each node in the cluster. This type of cluster is supported only on Windows Server 2003 and later.
Quorum database
Contains all the operational configuration information for the cluster. Also called Quorum.
Standard cluster
A cluster that is built with a single quorum database stored on a shared disk array.
Caution: Service Startup
The start-up type for SQL Server clustered services should be Manual. The Windows cluster needs to have control over the services. If you change the start-up type to Automatic, you will cause errors with the cluster operations.
Important: Cluster Group Specification
Unlike the previous three versions of SQL Server, the SQL Server 2008 setup routine does not allow you to specify an existing cluster group to install services to, even if the group specified does not contain a SQL Server instance. To get around this problem, make sure that you do not already have a group created with the same name that you want when installation completes.
Chapter 15: Lesson 1: Database Mirroring
xx
Quick Check: 1. What are the three Database Mirroring roles, and what functions do they serve?
1. The principal database currently serves the database to applications. The mirror database is in a recovering state and does not allow connections, and the optional witness server is an instance of SQL Server that is used for automatic failure detection and failover from a primary to a mirror database.
Quick Check: 2. What are the three operating modes for Database Mirroring?
2. The three operating modes are High Availability operating mode, High Performance operating mode, and High Safety operating mode.
Quick Check: 3. Which mode is not recommended for normal operations?
3. High Safety operating mode is not recommended for normal operations because its synchronous transfers have a high performance impact without the benefit of automatic failover.
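As a point of reference, the operating mode is controlled on the principal with ALTER DATABASE. A minimal sketch, assuming the AdventureWorks database and a hypothetical witness address:
-- Synchronous transfer (SAFETY FULL) plus a witness = High Availability operating mode
ALTER DATABASE AdventureWorks SET PARTNER SAFETY FULL;
ALTER DATABASE AdventureWorks SET WITNESS = 'TCP://witnessserver.contoso.com:5024';
-- Synchronous transfer without a witness = High Safety operating mode
ALTER DATABASE AdventureWorks SET WITNESS OFF;
-- Asynchronous transfer = High Performance operating mode
ALTER DATABASE AdventureWorks SET PARTNER SAFETY OFF;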
D. Monitor"
"1. Correct Answer: B
D. Service multiple Database Mirroring sessions"
"2. Correct Answers: B and D
D. ROLE set to PARTNER"
"3. Correct Answers: B and D
Exam Tip:
For database mirroring, you must configure the principal, mirror, and witness endpoints on different SQL Server instances
Lesson Summary:
"* Database Mirroring involves databases in three different roles. The database in the principal role is available for connections and enables data modifications. The database in the mirror role is inaccessible to connections and receives transactions issued at the principal. The witness is defined for a SQL Server instance and is used with the High Availability operating mode to arbitrate a failover.
Best Practices: High Safety Operating Mode
Because High Safety operating mode's synchronous transfer can affect application performance and does not offer the benefit of automatic failover, this operating mode is not recommended for normal operations. You should configure a Database Mirroring session in High Safety operating mode only when you need to replace the existing witness server. After you have replaced or recovered the witness, you should change the operating mode back to High Availability operating mode.
Note: GUI Alternative
SSMS has a GUI interface to configure Database Mirroring endpoints and to configure, fail over, pause, resume, and reconfigure the operating mode of a Database Mirroring session. You can access this GUI by right-clicking a database, selecting Properties, and then selecting the Mirroring page in the Properties dialog box.
Note: Retrieving Endpoint Information
You can retrieve endpoint information by querying the sys.database_mirroring_endpoints dynamic management view.
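A minimal sketch of such a query; joining to sys.tcp_endpoints to also show the port is an assumption about which columns you care about:
SELECT dme.name, dme.role_desc, dme.state_desc, te.port
FROM sys.database_mirroring_endpoints AS dme
JOIN sys.tcp_endpoints AS te ON dme.endpoint_id = te.endpoint_id;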
Note: Serving the Database
When an instance has a database that allows transactions to be processed against it, it is said to be "serving the database."
Note: Specifying an Endpoint Name
Specify Mirroring as the endpoint name, which standardizes the naming convention for these types of endpoints so that you can distinguish them easily from other types of endpoints.
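For example, a minimal endpoint definition following that naming convention (the port number 5022 is only an example) might look like this:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);
On the instance acting as the witness, ROLE = WITNESS would be specified instead.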
Note: Split Brain Problem
If the mirror were allowed to determine that it should serve the database by itself, it could introduce a situation whereby the database would be accessible to transactions on more than one server. This is referred to as a "split-brain problem."
Note: Transient Operating States
The principal and mirror roles are transient operating states within a Database Mirroring session. Because the databases are exact equivalents and are maintained in synch with each other, either database can take on the role of principal or mirror at any time.
Caution: Filestream Data
You need a version of the AdventureWorks database from a previous edition of SQL Server, because the SQL Server 2008 version of the AdventureWorks database contains FILESTREAM data, and Database Mirroring is not compatible with FILESTREAM data.
Caution: Ping Test Limitation
A database can become inaccessible because of a runaway transaction or other operations. However, Database Mirroring does not detect these failures; only a failure of the ping test is considered a failure. You also have to balance carefully the number of mirroring sessions that are configured on a single instance. Each database participating in a mirroring session pings the other database every couple of seconds to determine whether it is still connected and available. Both the principal and the mirror must ping the witness every couple of seconds to determine whether it is still connected and available. The failure of the ping test causes a failover in High Availability operating mode. Having a large number of databases participating in mirroring sessions on a single server increases the possibility of an anomalous failover because of a saturation of the network.
Caution: Port Numbers
If your instances are installed on the same server, the port number needs to be different for each endpoint so that you do not create a conflict on the TCP/IP stack.
Important: Database-Level vs. Server-Level Roles
Principal and mirror roles occur at a database level and must be defined within SQL Server 2008 instances that are running either SQL Server 2008 Standard or Enterprise. However, you define the witness role at an instance level. The instance of SQL Server 2008 that you use for the witness server can be running any edition, including SQL Server 2008 Express, which is why you refer to a principal or mirror database but a witness server.
Important: Hot Standby: Witness Must Be Online
Automatic failover can occur only if the witness server is online. If the witness is offline, automatic failover does not happen. This means that you can use High Availability operating mode to provide a hot standby server only when the witness server is online. Otherwise, you have a warm standby configuration.
More Info: Endpoints
For more information about defining endpoints, please refer to Chapter 8, "Designing SQL Server Endpoints."
Chapter 15: Lesson 2: Initializing Database Mirroring
xx
Quick Check: 1. What is the process for preparing a database to participate in a Database Mirroring session?
1. Change the recovery model to Full; back up the primary database; restore to the instance hosting the mirror database with the NORECOVERY option; and then copy all system objects such as logins, linked servers and so on.
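A minimal sketch of these steps in Transact-SQL, assuming hypothetical backup file paths:
-- On the instance hosting the principal database
ALTER DATABASE AdventureWorks SET RECOVERY FULL;
BACKUP DATABASE AdventureWorks TO DISK = 'C:\Backups\AdventureWorks.bak';
BACKUP LOG AdventureWorks TO DISK = 'C:\Backups\AdventureWorks_log.trn';
-- On the instance that will host the mirror database
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\Backups\AdventureWorks.bak' WITH NORECOVERY;
RESTORE LOG AdventureWorks FROM DISK = 'C:\Backups\AdventureWorks_log.trn' WITH NORECOVERY;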
D. Restore the database with NORECOVERY"
"1. Correct Answers: B and D
D. Bulk-logged recovery model"
"2. Correct Answer: A
D. Manual failover"
"3. Correct Answers: B and C
Exam tip:
Database Mirroring cannot be configured against databases that have a FILESTREAM filegroup
Lesson Summary -- Chapter 15: Lesson 2: Initializing Database Mirroring
"* You must set all databases that participate in database Mirroring to the Full recovery model to ensure that all transactions are applied to the mirror.
Best Practices: Full Recovery Model Required
Because Database Mirroring cannot be configured against a database that has a FILESTREAM filegroup, it is strongly recommended that you separate all FILESTREAM data into a dedicated database.
Note: Finding Database Mirroring Endpoints
"If you do not remember the endpint addresses, you can retrieve them using one of two different mechanisms:
Note: Full Recovery Model Required
You cannot configure Database Mirroring if the participating databases are not in the Full Recovery model. And you cannot change the recovery model of a database participating in Database Mirroring.
Note: GUI Alternative to configuring Database Mirroring
You can right-click the AdventureWorks database on INSTP, select the Mirroring tab, and configure a Database Mirroring session using the GUI
Note: Initializing the Mirror
Instead of performing a backup to initialize the mirror, I always use the last full backup of the primary database and then apply all subsequent transaction logs. After all log backups are taken, I execute a final transaction log backup to capture all remaining transactions and then initiate Database Mirroring. An alternative method uses Log Shipping to maintain the two databases in synchronization and as the initialization mechanism for Database Mirroring. In this case, you might still have to apply at least one transaction log backup before you can initiate the Database Mirroring session.
Note: Using SSIS Tasks to Transfer Objects
To transfer objects to the instance hosting the mirroring database, you can use SSIS, which includes the Transfer Logins task for transferring logins from one instance of SQL Server to another while keeping any passwords encrypted. SSIS also provides tasks for transferring SQL Server Agent jobs, error messages, and other types of objects.
Quick Check: 1. How do you fail back from a forced failover?
1. If the partners were synchronized at the time of the failover, you can apply transaction log backups to roll the failed partner forward in time, and then Database Mirroring finishes the resynchronization process. If the partners were not synchronized at the time of the failover, you need to remove mirroring and reinitialize.
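A minimal sketch of the commands involved, assuming the AdventureWorks database: forcing service on the mirror when the principal is lost, and resuming the session once the failed partner is back online:
-- Run on the mirror while the principal is unavailable (may lose unsent transactions)
ALTER DATABASE AdventureWorks SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
-- After the failed partner is back online and rolled forward, resume the suspended session
ALTER DATABASE AdventureWorks SET PARTNER RESUME;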
Quick Check: 2. Which server objects are your primary concern in the event of a failover?
2. The instance containing the mirror database must also have all the logins present that are required for applications and users to connect to the database. If the applications use linked servers, they also need to exist to prevent application failures. Other server objects such as SSIS packages and jobs are not as critical during the failover and can generally be addressed shortly after a failover, after all applications are online.
D. Replication"
"1. Correct Answer: A
D. Log Shipping"
"2. Correct Answer: B
D. Snapshot replication"
"3. Correct Answer: C
D. Log Shipping"
"4. Correct Answer: A
D. Manual failover"
"1. Correct Answers: A and D
D. Manual failover"
"2. Correct Answers: B and D
Lesson Summary -- Chapter 15: Lesson 3: Designing Failover and Failback Strategies
"* Database Mirroring can take care of the failover from the principal to the mirror, automatically promote the mirror and emote the failed partner, and even perform an automatic incremental resynchronization in some cases. However, this capability is wasted unless you are ensuring that logins, linked servers, and other supporting objects are present on the mirror at the time of failover.
Best Practices: Falling Back
Copy in paragraph from 472
Database Mirroring
A SQL Server 2008 high-availability technology configured between a principal and mirror database (and an optional witness server) that maintains close synchronization of the data and the database schema and also offers the option of automatic failover.
Database Mirroring role
Defines the operating state of each participant in a Database Mirroring session. There are three possible roles: principal, mirror and witness.
Database Mirroring session
A principal database, mirror database, and optional witness server configured to exchange data using one of the three available operating modes.
Database partner
Refers to a member of a pair of principal and mirror databases participating in a Database Mirroring session.
Endpoint
A connection mechanism that is used by any process needing to access the SQL Server engine to process transactions. For Database Mirroring, you create TCP endpoints on every instance involved in the Database Mirroring session.
High Availability operating mode
The Database Mirroring operating mode that requires a principal, mirror, and witness. Synchronously transfers data from the principal to the mirror, and enables automatic failure detection and automatic failover so long as the witness server is accessible when the failure occurs.
High Performance operating mode
The Database Mirroring operating mode that requires only a principal and a mirror, asynchronously transfers data from the principal to the mirror, and allows only a manual failover.
High Safety operating mode
The Database Mirroring operating mode that requires only a principal and mirror, synchronously transfers data from the principal to the mirror and allows only for a manual failover.
Mirror
The database within a Database Mirroring session that is in a recovering state, does not allow any connections, and is receiving changes from the principal.
Mirror failover
The process whereby the mirror database is promoted to the principal and recovered. This process also automatically demotes the principal to become the mirror within the Database Mirroring session.
Operating mode
The configuration that governs how a Database Mirroring session synchronizes transactions and which failover options are available. You can choose from three operating modes: High Availability, High Performance, or High Safety.
Principal
(1) An object that can own other objects within an instance; the SQL Server object that defines a security access path for a user. (2) The database within a Database Mirroring session that is recovered, is online, and allows transactions to be processed against it.
Transparent Client Redirect
The process that describes the functionality built into the new MDAC connection library that ships with Visual Studio 2005 and allows principal and mirror connections to be cached in the connection object. Failure of the principal causes this code to redirect a client connection to the mirror without developer intervention or custom coding.
Witness server
The arbiter within the High Availability operating mode, also known as a witness. The purpose of the witness is to guarantee that the database cannot be served on more than one instance at the same time.
Chapter 16: Lesson 1: Overview of Log Shipping
xx
Quick Check: 1. What are the modes that a standby server can be in?
1. Standby Mode and No Recovery Mode.
Quick Check: 2. What are the three components of log shipping and what are their purpose?
2. The primary database is accessible and accepts transactions. The secondary database has a continuous chain of transaction log backups applied to keep it up to date with the primary. The monitor instance sends alerts when the log shipping session is out of sync.
D. Standby"
"1. Correct Answer: B
Exam Tip:
Log shipping relies on transaction log backups. If the recovery model is changed to Simple, log shipping ceases to function. Similarly, issuing a BACKUP LOG…WITH TRUNCATE_ONLY command also breaks log shipping.
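For clarity, these are the kinds of statements the tip warns about; either one breaks the chain of transaction log backups (the TRUNCATE_ONLY form is the older, deprecated syntax and is shown only to illustrate the tip):
ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE;   -- breaks the transaction log chain
BACKUP LOG AdventureWorks WITH TRUNCATE_ONLY;        -- discards log records instead of backing them up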
Lesson Summary -- Chapter 16: Lesson 1: Overview of Log Shipping
"* Log shipping is an automated way to apply a continuous chain of transaction log backups to one or more standby servers.
Note: Terminology Conventions
Log shipping operates between databases, so a primary server and secondary server do not actually exist. It is entirely possible to log-ship one database from ServerA to ServerB and then log-ship a different database from ServerB back to ServerA. When we refer to a primary or primary server, we are specifically referring to the server running SQL Server that contains the primary database within a particular log shipping session. When we refer to a secondary or secondary server, we are specifically referring to the server running SQL Server that contains the secondary database within a particular log shipping session. When we refer to a standby or standby server, we are specifically referring to the server running SQL Server that contains the standby database within a particular log shipping session. For brevity, we use the terms primary and secondary/standby throughout the remainder of this chapter. Please keep in mind that these terms refer to a particular database, and any other object referenced for either a primary server or a secondary server applies to the instance that is hosting the specific database.
Caution: Log Shipping Scope
The scope of log shipping is limited to a database. Therefore, log shipping does not take into account any cross-database dependencies that might exist. For example, if you have a database that contains customers and a database that contains orders for those customers, your applications are maintaining the integrity of the data by the way transactions are handled. However, log shipping breaks this integrity because it operates by using transaction log backups, and SQL Server does not allow multidatabase transactions. It is possible that even if you had both customer and orders databases participating in log shipping sessions with backups and restores operating at the same interval, upon recovery, orders could exist without a corresponding customer or customers could exist without all their orders. As a rule, if you have an environment set up that prevents you from using foreign keys to enforce referential integrity (such as when you place customers and orders in separate databases instead of separate tables within the same database), SQL Server cannot ensure that data will remain linked across databases.
More Info:
It is possible to implement log shipping between two databases within the same SQL Server instance. However, this is not a recommended practice for production environments, because if you lose the instance, you have lost both your primary and standby databases. A log shipping configuration confined within a single instance is for testing and learning purposes only.
Chapter 16: Lesson 2: Initializing Log Shipping
xx
Quick Check: 1. Changing the primary database into which recovery model breaks log shipping?
1. Changing the recovery model of a database to Simple breaks the transaction log chain and does not allow transaction log backups to be executed.
Quick Check: 2. On which type of backup is log shipping based?
2. Transaction log backups.
Quick Check: 3. Which operation cannot be performed against the secondary database to continue to apply transaction log backups?
3. The database cannot be recovered, so you cannot issue RESTORE DATABASE…WITH RECOVERY
D. DDL Triggers"
"1. Correct Answer: B
Exam tip:
SSIS has a task that you can use to transfer logins from one instance to another.
Lesson Summary -- Chapter 16: Lesson 2: Initializing Log Shipping
"* A database must be in either the Full or Bulk-logged recovery model to participate in log shipping.
Best Practices
This process ensures that you minimize the time required to initialize log shipping by not having to wait for another full backup to complete. It also ensures that you minimize the disk space consumed.
Best Practices: Configuring Intervals
"The scheduling interval for the backup job and copy job dictates the data loss that you can expect in the event of aprimary server failure. For example, if you configure backups to occur evry 15 minutes, but only copy backups every 45 minutes, you could potentially lose one hour of backups if the primary server fails. This is because the backup could complete just after the copy job finishes, which means that the copy job did not pick up the last 15 minutes of transactions and will not copy them for another 45 minutes, you could potentially lose on hour of backups if the primary server fails. This is because the backup could complete just after the copy job finishes, which means that the copy job did not pick up the last 15 minutes of transactions and will not copy them for another 45 minutes.
Best Practices: Data Loss Exposure
The general rule of thumb for data loss exposure is that it should be twice the interval of the transaction log backups. For example, if transaction log backups are executed every five minutes, the data loss exposure within a log shipping environment is considered to be 10 minutes. This interval accounts for the time it takes to complete a transaction log backup and copy it to the secondary. On systems that experience a very high transaction volume, transaction log backups can seem to occur almost continuously because it might take almost as long to complete a backup as the interval on which the backup is scheduled.
Best Practices: Leveraging SSMS
SSMS provides a single interface for configuring all the components of log shipping. Although the interface can perform all the configuration, most database administrators (DBAs) prefer to use Transact-SQL to set up log shipping. Fortunately, you can use SSMS to construct the log shipping configuration and generate a script for the implementation.
Best Practices: Restoring a full backup during configuration.
It is generally not recommended to have log shipping generate a full backup during the configuration of the session. This can have a very big impact on an existing environment, particularly if you have databases in excess of about 10 GB in size. When you configure log shipping, you will have existing backups already. (Note that you should always have backups of your databases.) To initialize log shipping in a typical production environment, you generally follow the steps outlined in this section.
Note: Built-in or Custom-Coded
The only difference between using custom-coded log shipping and using the built-in components that ship with SQL Server involves variances related to business requirements. Custom-coded log shipping generally accounts for additional requirements of a specific business environment. Both methods operate on the same principles: continuously restoring transaction logs to a secondary database. To simplify the explanation of log shipping, we devote the remaining two lessons in this chapter solely to the built-in components that ship with SQL Server 2008.
Note: Remapping users
To remap a database user to a SQL Server login correctly, you need to execute the ALTER USER command (with the WITH LOGIN clause) after the secondary database is brought online. Windows logins do not need to be remapped because the SID is globally unique and is always mapped correctly.
Caution: Maintenance Plans
It is very common for maintenance plans to be configured to back up databases. The maintenance plans generally specify that all user databases should be backed up by the plan, so an administrator does not have to worry about reconfiguring the maintenance jobs when databases are added or removed. Most maintenance plans that are created will include the database on which you just configured log shipping, which leads to having multiple transaction log backups running against the primary database. The maintenance plan creates one set of transaction log backups, and the log shipping jobs create another set of transaction log backups. Each set is only part of what needs to be restored. The combination of both sets of backups is required to ensure that the database is up to date because each transaction log backup contains only the data that has changed since the last transaction log backup. Before setting up log shipping, you should change any maintenance plans that perform transaction log backups to exclude the primary database. Full and differential backups do not affect log shipping.
Caution: SQL Server Agent
Log shipping relies on jobs being run by the SQL Server Agent service. If the SQL Server Agent service is not running, log shipping is not running. If the jobs for log shipping are disabled or do not run, log shipping is not running.
Important: Filestream
The AdventureWorks database contains a FILESTREAM filegroup. Before restoring the AdventureWorks database to the secondary instance, you must ensure that FILESTREAM has been enabled.
Chapter 16: Lesson 3: Designing Failover and Failback Strategies
xx
Quick Check: 1. Which restore option is used to bring a database online?
1. The WITH RECOVERY option brings a database online, making it accessible to connections.
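For example, bringing a secondary online during a failover is a recovery-only restore, so no backup file is needed:
-- Recovers the database and makes it accessible; no further transaction logs can be applied afterward
RESTORE DATABASE AdventureWorks WITH RECOVERY;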
Quick Check: 2. What are the two reasons that would require you to take an additional outage and failback to the original primary after it is repaired and back online?
"2. The only reasons that you would need to fail back to the original primary are the following:
D. Database Mirroring"
"1. Correct Answer: A
D. Replication"
"2. Correct answer: D
D. Log shipping"
"3. Correct Answer: D
D. Send an event notification"
"4. Correct Answer: C
D. EXECUTE ALTER LOGIN"
"1. Correct Answer: B
D. SQL Server Agent jobs"
"Correct Answer: B
D. Store the backup of your database master key in a different directory from your database backups."
"6. Correct Answers: A and C
D. sp_change_users_login"
"7. Correct Answer: B
E. Primary -- Database Mirroring; secondary -- log shipping"
"8. Correct Answer: D
Lesson Summary -- Chapter 16: Lesson 3: Designing Failover and Failback Strategies
"* Detecting a failure and failing over is a manual process for log shipping.
Best Practices: Failback from a Standby
Whenever possible, you should ensure that performance or fault tolerance does not degrade if an application needs to fail over to the secondary database. This eliminates any technical reason for needing to fail back to the original primary. If you do not need to fail back to the original primary, you can reverse the direction of log shipping after the failed server is back online. By not having to fail back, you eliminate an additional outage on the application.
Backup job
A scheduled job in SQL Server Agent that creates the transaction log backups for a log shipping session.
Copy job
A scheduled job in SQL Server Agent that copies transaction log backups from the primary to the secondary within a log shipping session.
Log shipping session
The combination of a primary database and a secondary database whereby a continuous chain of transaction log backups is taken against the primary database and restored to the secondary database.
Monitor instance
The instance that is configured to perform periodic health checks against a log shipping session.
Primary database
The database that is recovered and accepting transactions within a log shipping session.
Primary server
The server hosting the instance of SQL Server that is hosting the primary database.
Restore job
A scheduled job in SQL Server Agent that restores transaction log backups to the secondary.
Secondary database
The database that is unrecovered (NO RECOVERY or STANDBY) and has transaction log backups from the primary continuously restored to it.
Secondary server
The server hosting the instance of SQL Server that is hosting the secondary database.
Standby database
See secondary database
Standby server
See secondary server
Transaction log chain
A continuous set of transaction log backups that can be applied to a full backup without any gaps in the transaction log sequence.
Chapter 17: Lesson 1: Overview of Replication
xx
Quick Check: 1. What are the three methods of replication?
1. Snapshot, transactional, and merge.
Quick Check: 2. What are the five agents used in replication?
2. Snapshot Agent (snapshot.exe), Log Reader Agent (logread.exe), Distribution Agent (distrib.exe), Merge Agent (replmerg.exe), and Queue Reader Agent (qrdsvc.exe).
Quick Check: 3. What are the three types of data conflicts?
3. Insert of a duplicate primary key, update conflict, and update of a nonexistent row.
Quick Check: 4. What are the two roles that a database can have within a replication architecture?
4. Publisher or subscriber.
Quick Check: 5. What are the core components of replication?
5. One or more articles are combined into a publication that forms the basis of a replication session. The articles can optionally have filters applied to them.
D. Peer-to-peer replication"
"1. Correct Answer: A
D. Transactional replication with queued updating subscribers."
"2. Correct Answer: D
Exam Tip:
Although replication has the capability to apply filters to articles, this capability is not used in high-availability architectures. A high-availability architecture is mainly concerned with maintaining a complete and coherent copy of the data on a separate instance of SQL Server.
Lesson Summary: Chapter 17: Lesson 1: Overview of Replication
"* One or more articles can be combined into a publication to form the definition of the data set that will be moved by the replicaiton engine.
Note:
Data moving within replication is referred to as the publication or set of data. It is always within the context of the subset of the data that you have defined to be replicated based on the articles selected and any row or column filters applied. This is a unique characteristic of replication, whereby a portion of a database can be made highly available.
Note: Applying a Snapshot
"By sefault, a snapshot applies the table structure, primary key and clsutered index, unique constraints, and data to the subscriber. All other objects related to a table, such as check constraints and foreign key constraints, are not sent. You can override default snapshot behavior by modifying the article properties.
Note: Cycle of an Agent
Replication can be configured to run in either a continuous or a scheduled mode. In a scheduled mode, the replication agent is run on a periodic basis. When configured in a continuous mode, the replication engine is constantly running. In either case, the replication engine always runs in a cycle of determining whether changes exist to be replicated, moving the changes to the subscriber, and then acknowledging receipt of changes. This process is referred to as a cycle of the replication agent. In a scheduled mode, the cycle of an agent is more obvious because the agent starts, performs some work, and then shuts down. The cycle is less obvious in continuous mode because the agent never shuts down, but instead it starts another cycle as soon as the previous one is completed. The cycle of an agent is a very important concept for understanding a variety of scenarios that can occur -- the most important of which are data conflicts.
Note: Learning Replication
The practices in this chapter configure a replication environment using the settings that I have always recommended when learning replication. You configure the publisher, subscriber, and distributor all within the same instance, which has the effect of replicating between two databases on the same instance. This is the most straightforward way to learn replication because it eliminates the two most common causes of configuration issues in replication: security and connectivity.
Note: Snapshot Folder
The snapshot folder is a directory location that you specify when you configure replication, which serves as the default location for a snapshot. When you create a publication, you can override the location of the snapshot folder for the specific publication.
Note: Service Accounts
The replication engine uses named accounts for security. Therefore, the service account that SQL Server and SQL Server Agent are running under must be either a local or domain account. You cannot use the LocalSystem account for the service account.
Caution: Minimally Logged Transactions
"If a database is participating in replication, you must be extremely careful with Bulk-Logged and Simple recovery models. When a database is placed in the Bulk-logged or Simple recovery model, minimally logged transactions can be executed. These types of transactions only log page allocations and deallocations into the transaction log; they do not fire triggers.
Chapter 17: Lesson 2: Transactional Replication
xx
Quick Check: 1. Name two options for performing transactional replication.
1. Immediate updating subscribers and queued updating subscribers.
Quick Check: 2. Name two architectures for performing transactional replication.
2. Peer-to-peer and bidirectional.
Quick Check: 3. What special transaction is now available for transactional replication that provides timing statistics?
3. Tracer tokens
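A minimal sketch of posting and reviewing a tracer token, assuming a hypothetical publication name and that the statements run in the publication database on the publisher:
EXEC sys.sp_posttracertoken @publication = N'AdvWorksPub';
EXEC sys.sp_helptracertokens @publication = N'AdvWorksPub';  -- lists the tokens that have been posted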
Quick Check: 4. What are the two procedures that are used to validate whether a publisher and subscriber are synchronized?
4. sp_publication_validation and sp_article_validation
D. Transactional replication in a queued updating subscriber configuration"
"1. Correct Answer: C
Exam Tip:
You can configure a transactional publication with both the immediate updating subscriber and queued updating subscriber options. The queued updating subscriber option can be used as a failover mechanism when the publisher is not available.
Lesson Summary -- Chapter 17: Lesson 2: Transactional Replication
"* Transactional replication uses the Log Reader Agent to move transactions from the transaction log on the publisher to the distribution database. The Distribution Agent then moves transactions from the distribution database to each subscriber.
Best Practices:
Validation can cause a lot of overhead, particularly in environments with large numbers of subscribers to a single publication or with publications that contain a large number of articles. It is most common to perform a row count-only validation on a daily basis while reserving the row count and binary checksum validation for a weekly execution.
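A minimal sketch of kicking off validation for a publication; the publication name is hypothetical, and @rowcount_only = 1 requests a row count-only check while 2 adds the binary checksum:
EXEC sys.sp_publication_validation
    @publication = N'AdvWorksPub',
    @rowcount_only = 1;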
Note: Cleaning Up the Distribution Database
The Distribution Agent does not directly clean up entries in the distribution database that have been written successfully to all subscribers. A separate job (referred to as the cleanup agent) runs on a periodic basis to remove transactions that have been sent to all subscribers. This job is separated for performance purposes and enables the Distribution Agent to send transactions to subscribers at a much higher rate than it takes for the Log Reader Agent to write transactions to the distribution database. This processing architecture ensures that the Distribution Agent does not become a bottleneck in the process, even when it must handle sending transactions to many subscribers.
Note: UDFs
You include the UDFs within the publication because there are tables within the AdventureWorks database that have constraints defined with UDFs. If they are not either created already on the subscriber or added to the publication, a failure occurs when the snapshot is applied.
Caution: Agent Security Settings
It is not recommended to run the replication agents under the SQL Server Agent service account. The replication agent accounts should have minimal permissions in your environment, and the SQL Server Agent service account elevates those permissions. SQL Server Books Online has several very good articles about setting replication security. You are explicitly violating the security best practices in the very specific examples in this chapter, and you're doing this for a very good reason. This chapter is about learning the basics of replication and how it can fit into your environment. The purpose of this chapter is to become familiar with the replication engine before you take the step of deploying it across multiple instances with varying security requirements and locking down all the access. If you do not understand the basic operation of the engine before applying all the security to the agents, you will have a hard time understanding whether any errors you encounter are because of a situation that you introduced, a misconfiguration, or security blocking access. You should not run replication agents with this configuration in a production environment.
Chapter 17: Lesson 3: Merge Replication
xx
Quick Check: 1. Which two tables allow merge replication to determine the changes that need to be synchronized between publisher and subscriber?
1. MSmerge_contents and MSmerge_genhistory
Quick Check: 2. What feature makes merge replication more resilient to outages?
2. The merge engine uses the metadata to determine the changes that need to be applied. Because both publisher and subscriber maintain a full history of all changes and the metadata is stored within the same database participating in replication, backup/restore processes keep the metadata synchronized with the data being replicated. This ensures that the merge engine can recover, even from a restore operation, and incrementally resynchronize itself.
(question of 543)
XX
Lesson Summary -- Chapter 17: Lesson 3: Merge Replication
"* The merge engine provides a very powerful and resilient engine for distributing changes because it was designed because it was designed for mobile, disconnected users.
Agent profile
A set of configuration options that are applied to a replication agent.
Article
The most granular unit of a publication. Articles can be defined for tables, views, stored procedures and functions.
Column filter
Specifies filtering criteria to be applied to columns in a table or view that are replicated. This enables only a subset of columns to be replicated from publisher to subscriber.
Conflict resolver
Predefined code that is employed with merge replication or transactional replication with the queued updating subscribers option to ensure that a single coherent piece of data is maintained throughout the replication architecture if changes to the same row of data are applied at multiple locations within a replication cycle.
Data conflict
A situation that is created because of the replication engine moving changes on an asynchronous basis and different users making changes to the same row of data on both the publisher and subscriber.
Distribution Agent
The process responsible for applying snapshots for snapshot and transactional replication, as well as moving incremental changes to subscribers participating in transactional replication.
Distributor
The instance of SQL Server that contains the distribution database. Also the location where the majority of the processing overhead in a replication architecture occurs.
Dynamic filter
A row filter that is evaluated based on parameters passed from each subscriber.
Generation
A batch identifier used in merge replication to identify batches of changes on both publishers and subscribers.
Join filter
A row filter that is based on join criteria between two articles within a publication.
Log Reader Agent
The executable responsible for polling the transaction log on the publisher for changes and writing any changes to the distribution database.
Merge Agent
The executable responsible for packaging changes to be exchanged between publisher and subscriber, as well as detecting and resolving any conflicts.
Merge replication
A method of replication that is based on tracking changes in a set of metadata tables internal to both publisher and subscriber. Designed primarily for mobile, disconnected applications that exchange changes in both directions.
Publication
The unit of exchange between publisher and subscriber that contains one or more articles.
Publisher
The designated master within a replication architecture. The publisher is considered to contain the master copy of the data.
Queue Reader Agent
The executable responsible for flushing changes made on a subscriber back to the publisher and resolving any conflicts. Used only with the queued updating subscriber option.
Reinitialization
The process by which a snapshot is regenerated and reapplied to a subscriber; it wipes out any data on the subscriber and resynchronizes it to the publisher.
Replication topology
A business diagram that shows the way data flows in a replication architecture.
Replication watermark
The last LSN that was read by the Log Reader Agent.
Republisher
A subscriber that is also configured as a publisher.
Row filters
The criteria used to restrict the set of rows that are replicated for an article.
Snapshot Agent
The executable responsible for generating a snapshot.
Snapshot folder
A directory used to store all the data generated by the snapshot process.
Snapshot replication
A method of replication that sends a full copy of the data defined within a publication from publisher to subscriber.
Subscriber
A database designated to receive changes from a publisher. Can be configured to enable changes to be sent back to a publisher, if desired.
Subscription
A contract between publisher and subscriber that defines the publication that will be sent to a subscriber.
Tracer token
A specialized transaction that is used to log the latency between each component of transactional replication.
Transactional replication
A method of replication that is used to send data in one direction, from a publisher to a subscriber. This method propagates incremental changes applied to the publisher.
Validation
A process that is used to determine whether the publisher and subscriber are synchronized.