498 Cards in this Set
Specialized tasks included in SSIS part1
(4) |
Copy SQL Server objects
Manage DB backups Re-indexing Extensibility with VSTA |
|
Precedence constraints in SSIS
|
Allow you to configure operational workflows based on a component's success, failure, or other execution result
|
|
Package configurations in SSIS
|
Enables developers to expose internal properties of a package so that the properties can be modified from outside the package
|
|
Specialized tasks included in SSIS part 2
(4) |
Workflow tasks
Fuzzy lookups Fuzzy grouping Combining data flows Remap characters Pivot/unpivot data sets |
|
Fuzzy search/lookup
(simple) |
A type of search that will find matches even when users misspell words or enter in only partial words for the search.
|
|
Fuzzy Lookup
(advanced) |
Enables you to match input records with clean, standardized records in a reference table. The matching process is resilient to errors that are present in the input records. Fuzzy Lookup returns the closest match and indicates the quality of the match.
|
|
Fuzzy Grouping
|
Enables you to identify groups of records in a table where each record in the group potentially corresponds to the same real-world entity. The grouping is resilient to commonly observed errors in real data, because records in each group may not be identical to each other but are very similar to each other. For example, Fuzzy Grouping is useful for grouping together all records in a Customers reference table that describe a single real customer.
|
|
SSRS
|
SQL Server Reporting Services
Platform for designing reports and distributing data |
|
SSRS components
(2) |
Report server
Report designer |
|
SSAS
|
SQL Server Analysis Services
Created to fill the gap between the data needs of the business and the ability of IT to provide data. |
|
SSAS components
(2) |
Online Analytical Processing
Data Mining |
|
OLAP
|
Online Analytical Processing
|
|
OLAP engine use
|
Allows you to deploy, query, and manage cubes that have been designed in Business Intelligence Development Studio.
|
|
BIDS
|
Business Intelligence Development Studio
|
|
Data Mining engine use
|
Allows users to find patterns and make predictions. It can be used to examine data trends over time.
|
|
SQL Workgroup is for
|
Small departmental projects with a limited set of features.
|
|
Express is for
|
Freely distributed edition of SQL Server for embedded applications and basic storage needs.
|
|
Compact
|
Designed as an embedded DB.
|
|
Standard CPU, DB size, RAM
|
CPU, DB size, RAM
4, Un, Un |
|
Workgroup CPU, DB size, RAM
|
CPU, DB size, RAM
4, Un, Un |
|
Express CPU, DB size, RAM
|
CPU, DB size, RAM
1, 4, 1 |
|
Compact CPU, DB size, RAM
|
CPU, DB size, RAM
1, 4, 1 |
|
SQL Server Management Studio(DB engine support)
|
Standard:yes
Workgroup:yes Express:separate download Compact:no |
|
Full Text Search(DB engine support)
|
Standard:yes
Workgroup:yes Express:Advanced Services only Compact:no |
|
Partitioning(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Parallel Operations(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Multiple Instances(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
DB snapshots(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Scalable Shared DB(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Indexed Views(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Log Compression(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Clustering(DB engine support)
|
Standard:2 nodes
Workgroup:no Express:no Compact:no |
|
DB Mirroring(DB engine support)
|
Standard:single-thread
Workgroup:no Express:no Compact:no |
|
Online Operations(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Resource Governor(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Backup Compression(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Hot Swap Memory/CPU(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Data Encryption(DB engine support)
|
Standard:Limited
Workgroup:Limited Express:Limited Compact:Password-based only |
|
Change Data Capture(DB engine support)
|
Standard:no
Workgroup:no Express:no Compact:no |
|
Policy Based Management(DB engine support)
|
Standard:yes
Workgroup:no Express:no Compact:no |
|
Performance data collection(DB engine support)
|
Standard:Yes
Workgroup:no Express:no Compact:no |
|
CLR(DB engine support)
|
Standard:Yes
|
|
XML support(DB engine support)
|
Standard:Native
Workgroup: Native Express:Native Compact:Stored as text |
|
Spatial data(DB engine support)
|
Standard: Yes
|
|
Stored procedures, triggers and views(DB engine support)
|
Standard:Yes
Workgroup:Yes Express:Yes Compact:no |
|
Merge Replication(DB engine support)
|
Standard:Yes
Workgroup:Yes Express:Subscriber only Compact:Subscribe only |
|
Transactional Replication(DB engine support)
|
Standard:Yes
Workgroup:Subscriber only Express:Subscriber only Compact:No |
|
Import/Export Wizard(SSIS support)
|
Standard:Yes
Workgroup:Yes Express:N/A Compact:N/A |
|
Package Designer(SSIS support)
|
Standard:Yes
Workgroup:Yes Express:N/A Compact:N/A |
|
Data Mining(SSIS support)
|
Standard:No
Workgroup:No Express:N/A Compact:N/A |
|
Fuzzy grouping/lookup(SSIS support)
|
Standard:No
Workgroup:No Express:N/A Compact:N/A |
|
Term extraction/lookup(SSIS support)
|
Standard:No
Workgroup:No Express:N/A Compact:N/A |
|
OLAP processing(SSIS support)
|
Standard:No
Workgroup:No Express:N/A Compact:N/A |
|
What services do you need dedicated accounts for
|
Database Engine
SQL Server Agent |
|
Possible authentication services in SQL
|
Windows Only
Windows and SQL Server |
|
What is the maximum number of SQL Server instances on a single machine
|
50 = 1 default + 49 named
supported only in SQL Server Enterprise |
|
Enumeration request
|
Applications can broadcast a special command request across a network to locate any servers running SQL Server. By disabling enumeration responses (setting the Hide Instance option to Yes) you prevent discovery of your SQL Server, preventing a possible attack
|
|
Which command must be executed against the SQL Server instance before FILESTREAM data can be stored
|
sp_configure
EXEC sp_configure 'filestream_access_level', 2; |
|
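A fuller sketch of the enablement step; this assumes FILESTREAM has also been enabled at the Windows service level via SQL Server Configuration Manager:

```sql
-- Enable FILESTREAM access at the instance level.
-- Level 2 = both T-SQL and Win32 streaming access.
EXEC sp_configure 'filestream_access_level', 2;
RECONFIGURE;
```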
Profiles and Database Mail
|
Profiles form the core element of Database Mail. A given profile can have multiple email accounts
|
|
Possible profile configurations
(2) |
Public - can be accessed by any user with the ability to send mail.
Private - can be accessed only by those users who have been granted access to the mail profile explicitly. |
|
What happens to Database Mail if SQL Server Agent is not running
|
Messages will accumulate in a queue within the MSDB database.
|
|
Query to enable Database Mail feature
|
EXEC sp_configure 'Database Mail XPs', 1
GO RECONFIGURE WITH OVERRIDE GO |
|
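After enabling the feature, a profile and account can be created with the msdb mail procedures. A minimal sketch; the account, profile, address, and server names below are illustrative:

```sql
-- Create a mail account (names/addresses are hypothetical).
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name    = 'OpsAccount',
    @email_address   = 'ops@example.com',
    @mailserver_name = 'smtp.example.com';

-- Create a profile and bind the account to it.
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'OpsProfile';
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name    = 'OpsProfile',
    @account_name    = 'OpsAccount',
    @sequence_number = 1;  -- first account tried when sending
```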
What service database mail utilizes
|
SQL Server Agent is used to send messages as a background process.
|
|
What happens if SQL Server Agent is not running
|
Messages accumulate in queue within the Msdb database.
|
|
How many mail accounts can a mail profile contain
|
One or more
|
|
What are Notification Services
|
A feature that was available in SQL Server 2005 that is no longer available in 2008
|
|
What is Activity Monitor
|
A feature of SSMS that displays query activity on the server
|
|
Achieving optimal disk I/O performance
|
Separate the transaction logs from data files so that you can isolate disk I/O
|
|
What happens to data prior to being written to the data files of a DB
|
Every transaction is written to a transaction log
|
|
Types of files used by SQL Server(2)
|
Data and transaction logs
|
|
What are data files for?
|
Data files are responsible for the long-term storage of all the data within a DB
|
|
What are transaction log files for?
|
Responsible for storing all the transactions that are executed against a DB
|
|
Explanation of a filegroup
|
Instead of defining the storage of objects directly to a data file, SQL Server provides an abstraction layer for more flexibility called a filegroup.
Filegroups are logical structures defined within a DB that map the DB and the objects contained within it to the data files on disk. |
|
Can a filegroup contain more than one file?
|
Yes
|
|
What is the ON clause
|
Allows you to specify, when you create an object, the filegroup where SQL Server will store the object
|
|
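A minimal sketch of the ON clause; the table and filegroup names are illustrative:

```sql
-- Store the table on a user-defined filegroup instead of PRIMARY.
CREATE TABLE dbo.Orders
(
    OrderID   INT      NOT NULL,
    OrderDate DATETIME NOT NULL
) ON FG_DATA;  -- FG_DATA is a hypothetical filegroup in the DB
```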
What can you specify for a file when is added to a DB
|
Initial size
Max size Auto-growth increment (percent or MB) |
|
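These properties map to the ALTER DATABASE ... ADD FILE syntax. A sketch with illustrative database, file, and path names:

```sql
-- Add a secondary data file with explicit size, growth, and cap.
ALTER DATABASE SampleDB
ADD FILE
(
    NAME = SampleDB_Data2,                   -- logical name (hypothetical)
    FILENAME = 'D:\Data\SampleDB_Data2.ndf',
    SIZE = 100MB,                            -- initial size
    FILEGROWTH = 10%,                        -- growth in percent or MB
    MAXSIZE = 1GB                            -- max size
) TO FILEGROUP FG_DATA;
```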
What algorithm SQL Server uses when writing to a file
|
Proportional fill algorithm. This algorithm is designed to ensure that all files within a filegroup reach the maximum defined capacity at the same time.
|
|
.mdf extension(3)
|
1) First data file created within a DB
2) Associated with the primary filegroup 3) Primary data file which contains all the system objects necessary for a database |
|
.ndf extension
|
Used for all other data files underneath a DB regardless of the filegroup to which the file is associated.
|
|
.ldf extension
|
Used for the transaction logs.
|
|
Describe the check pointing process
|
All data manipulation within SQL occurs in memory within a set of buffers. Data is first written to a memory buffer, then written to the transaction log, and finally persisted to a data file via a background process called check pointing.
|
|
Do data file design and I/O speed affect performance
|
Yes, the faster they are, the better the performance.
|
|
How SQL Server communicates with the files underneath a DB
|
SQL Server creates a separate thread for each file underneath a Database.
|
|
How many files and filegroups should you have when creating a DB?
|
You should have a file with an .mdf extension within a filegroup named PRIMARY, a file with an .ndf extension in a filegroup with any name that you choose, and the transaction log with an .ldf extension
|
|
DEFAULT property for a filegroup
|
Used to define the filegroup where SQL Server places objects if you do not specify the ON clause during object creation.
|
|
Why would you change the default filegroup?
|
After you create a DB you should mark the second filegroup as the default filegroup. By changing the default filegroup, you ensure that any objects you create are not accidentally placed in the primary filegroup
|
|
Command for changing the default filegroup
|
ALTER DATABASE <db name> MODIFY FILEGROUP <fg name> DEFAULT
|
|
Main reason not to place any of your objects on the primary filegroup is?
|
To provide as much isolation in the I/O as possible
|
|
Reasons to use secondary filegroup vs. primary filegroup
|
1) To provide as much isolation in the I/O as possible.
2) The data in the system objects doesn't change as often as the data in your objects. 3) By minimizing write activity to the primary data file, you reduce the possibility of introducing corruption due to hardware failures. 4) Increase the availability of the DB by minimizing the changes made to the primary filegroup. |
|
What determines the state of the DB
|
The state of the primary filegroup determines the state of the database. You can increase the availability of the database by minimizing the changes made to the primary filegroup.
|
|
Transaction Logs - process explanation
|
When SQL Server acknowledges that a transaction has been committed, SQL Server must ensure that the change is hardened to persistent storage. Although all writes occur through memory buffers, persistence is guaranteed by requiring that all changes are written to the transaction log prior to a commit being issued. In addition, the writes to the transaction log must occur directly to disk.
|
|
How SQL Server writes to the transaction log
|
Sequentially
|
|
When SQL Server reads from the log
|
Only during restart recoveries.
|
|
Why would you isolate the transaction log to a dedicated disk
|
SQL Server randomly reads and writes to the data files underneath a DB; by isolating the transaction log to a dedicated set of disks, you ensure that disk head movement stays optimal for the log's sequential writes.
|
|
What FILESTREAM allows you
|
To associate files with a database. The files are stored in a folder on the OS, but are linked directly into a Database where the files can be backed up, restored, full-text-indexed, and combined with other structured data.
|
|
tempdb database usage
|
Temporary objects
Work tables used in grouping/sorting Work tables to support cursors Version store supporting snapshot isolation level Overflow for table variables |
|
Why should you move tempdb?
|
Due to potential heavy traffic tempdb should be moved to a set of disks separated from DBs and backup files.
|
|
Multiple tempdb
|
Common practice for tempdb is to create one file per processor. The one file per processor is with respect to what SQL Server would consider a processor and not the physical processor, which could have multiple cores as well as hyper-threading.
|
|
DB Recovery options
|
Determine the behavior of the transaction log and how damaged pages are handled.
|
|
Recovery model (explanation)
|
Every DB within a SQL Server instance has a property setting called the recovery model. It defines the types of backups you can perform against a DB.
|
|
Recovery models available in SQL 2008 are:
(3) |
Full
Bulk-logged Simple |
|
Full Recovery Mode
|
All changes made using both DML and DDL are logged to the transaction log. It is possible to recover a DB in the Full recovery model to a given point in time so that data loss can be minimized.
|
|
Best practice recovery model
|
Every production DB that accepts transactions should be set to the Full recovery model.
|
|
Bulk-logged recovery model
|
Allows certain operations to be executed as minimally logged. When a minimally logged operation is performed, SQL Server does not log every row changed but only the extents, reducing overhead and improving performance.
|
|
Operations that are performed in minimally logged manner with the DB set in bulk-logged recovery mode are:
(5) |
BCP
BULK INSERT SELECT..INTO CREATE INDEX ALTER INDEX .. REBUILD |
|
Bulk-logged recovery model note for recovery procedure
|
Because the bulk-logged recovery model doesn't log every change to the transaction log, a DB cannot be recovered to a point in time within an interval in which a minimally logged transaction executed.
|
|
Simple recovery model
|
A DB in Simple recovery model logs operations to the transaction log exactly as the Full recovery model does. However, each time the DB checkpoint process executes, the committed portion of the transaction log is discarded.
|
|
Cmd for setting DB recovery model
|
ALTER DATABASE database_name SET RECOVERY { FULL / BULK_LOGGED / SIMPLE }
|
|
How pages get damaged
|
It is possible to damage data pages during a write to a disk if you have a power failure or failures in disk subsystem components during the write operation. Because the damage happens to a page on disk the only time that you see a result of the damage is when SQL Server attempts to read the page off disk.
|
|
What command allows you to discover and log damaged pages
|
PAGE_VERIFY CHECKSUM
|
|
Does SQL check for damaged pages by default
|
No, you need to turn on PAGE_VERIFY CHECKSUM to have SQL check for damaged pages.
|
|
What can happen to a DB if a damaged page is found
|
The default configuration of SQL Server does not check for damaged pages, and an encountered damaged page could cause the DB to go offline.
|
|
When is error 824 returned?
|
When a damaged page is detected, error 824 is returned to the calling application and logged to the SQL Server error log and Windows Application Event Log, and the ID of the damaged page is logged to the suspect_pages table in the msdb DB.
|
|
SQL Server 2008 introduces FILESTREAM why?
|
Although the volume of data within organizations has been exploding, leading the way in this data explosion is unstructured data. FILESTREAM was introduced to tackle the problem of storing, managing, and combining large volumes of unstructured data with the structured data in your DB.
|
|
What FILESTREAM allows you to do?
|
This feature allows you to associate files with a DB. The files are stored in a folder on the operating system, but are linked directly into a DB where the files can be backed up, restored, full-text-indexed, and combined with other structured data.
|
|
What AUTO OPTIONS do
|
Database options that enable certain actions to occur automatically
|
|
List AUTO OPTIONS
(5) |
AUTO_CLOSE
AUTO_SHRINK AUTO_CREATE_STATISTICS AUTO_UPDATE_STATISTICS AUTO_UPDATE_STATISTICS_ASYNC |
|
If AUTO_CLOSE is enabled
|
When the last connection to a DB is closed, SQL Server shuts down the DB and releases all resources related to the DB. When a new connection is made to the DB, SQL starts up the DB and allocates resources.
|
|
What is the default state for AUTO_CLOSE
|
Disabled. Never set it on a DB that is frequently accessed, as it will cause severe degradation in performance.
|
|
AUTO_SHRINK explanation
|
SQL Server periodically checks the space utilization of data and transaction log files. If the space checking algorithm finds a data file that has more than 25% free space, the file automatically shrinks to reclaim disk space.
|
|
Best practices for AUTO_SHRINK
|
Disabled and manually shrink files only when needed.
|
|
AUTO_CREATE_STATISTICS
|
SQL Server automatically creates statistics that are missing during the optimization phase of query processing. Although this creates some overhead, the benefits are worth it.
|
|
AUTO_UPDATE_STATISTICS
|
Updates out-of-date statistics during query optimization.
|
|
AUTO_UPDATE_STATISTICS_ASYNC
|
Controls whether statistics are updated during query optimization or if query optimization continues while the statistics are updated asynchronously.
|
|
CHANGE_TRACKING
|
New in SQL Server 2008. A lightweight mechanism that associates a version with each row in a table that has been enabled for change tracking. Each time the row is changed, the version number is incremented. Applications need only compare the row version to determine if a change has occurred.
|
|
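The database-level and table-level steps can be sketched as follows; the database and table names are illustrative, and the tracked table must have a primary key:

```sql
-- Enable change tracking at the database level with retention options.
ALTER DATABASE SampleDB
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Then opt in each table whose changes should be captured.
ALTER TABLE dbo.Customers
ENABLE CHANGE_TRACKING;
```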
CHANGE_RETENTION
|
After change tracking has been enabled for the DB, you can choose the tables within the DB for which change tracking info should be captured. The CHANGE_RETENTION option allows you to specify how long the tracking info should be kept.
|
|
AUTO_CLEANUP option
|
Used with the CHANGE_RETENTION option to automatically clean up tracking info.
|
|
Access to a DB is controlled with these options
|
ONLINE
OFFLINE EMERGENCY |
|
DB is in ONLINE STATE
|
You can perform all operations that would otherwise be possible.
|
|
DB is in OFFLINE state
|
The DB is inaccessible.
|
|
DB is in EMERGENCY state
|
The DB can be accessed only by a member of the db_owner role, and the only command allowed to be executed is SELECT.
|
|
DB READ_ONLY mode
|
The DB cannot be written to. In addition, SQL Server removes any transaction log file that is specified for the DB.
|
|
DB mode is set to READ_WRITE mode
|
Causes SQL Server to re-create the transaction log file that was removed when the DB was placed in READ_ONLY mode.
|
|
How can you fix a damaged page in SQL 2005?
|
The only way to fix a damaged page was to execute a page restore.
|
|
What happens to a damaged data page if the SQL Server is part of a mirroring session?
|
SQL Server 2008 automatically replaces the page with a copy of the page from the mirror. An entry is logged and can be reviewed with the sys.dm_db_mirroring_auto_page_repair view
|
|
sys.dm_db_mirroring_auto_page_repair view
|
Allows you to review all logs generated from automatically recovering damaged pages
|
|
User access options to access DB
|
SINGLE_USER
RESTRICTED_USER MULTI_USER |
|
SINGLE_USER mode
|
Only a single user is allowed to access the DB.
|
|
RESTRICTED_USER mode
|
Only allows access to members of the db_owner, dbcreator, and sysadmin roles.
|
|
What happens to a DB when multiple users try to access it but the DB is in either SINGLE_USER or RESTRICTED_USER mode
|
The ALTER DATABASE cmd is blocked until all the non-allowed users disconnect.
|
|
ROLLBACK action
|
Instead of waiting for all users to complete their transactions and disconnect from the DB, you can specify a ROLLBACK action to terminate the connections by force.
|
|
What happens to values when a DB call is parameterized
|
The values are passed as variables. Applications gain a significant benefit when DB calls are parameterized.
|
|
What SQL Serve does with every query's plan when the query is executed
|
SQL caches the query plan. When a query is executed, SQL Server parses and compiles the query. Then the query is compared to the query cache. If a match is found, SQL Server retrieves the plan that has already been generated and executes the query.
|
|
PARAMETERIZATION FORCED database option
|
Forces SQL Server to parameterize every query for a given DB.
|
|
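A minimal sketch of toggling this option; the database name is illustrative:

```sql
-- Force parameterization of every query against this database.
ALTER DATABASE SampleDB SET PARAMETERIZATION FORCED;

-- Revert to the default behavior when no longer wanted.
ALTER DATABASE SampleDB SET PARAMETERIZATION SIMPLE;
```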
How do you restrict DB access to member of the db_owner role and terminate all active transactions and connection at the same time?
|
You would execute the following command: ALTER DATABASE <DB name> SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
|
|
What size are the data pages in SQL
|
8 KB, divided into blocks of 512 bytes
|
|
PAGE_VERIFY can be set to:
|
TORN_PAGE_DETECTION
CHECKSUM |
|
PAGE_VERIFY TORN_PAGE_DETECTION
|
Exists only for backward compatibility and should not be used.
|
|
PAGE_VERIFY CHECKSUM
|
SQL Server calculates a checksum for the page prior to the write. Each time a page is read off disk, the checksum is recalculated and compared to the checksum written to the page. If the checksums do not match, the page has been corrupted.
|
|
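The option is set per database; a sketch with an illustrative database name:

```sql
-- Enable checksum verification for every page written to
-- and read from disk in this database.
ALTER DATABASE SampleDB SET PAGE_VERIFY CHECKSUM;
```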
Best practice for PAGE VERIFY CHECKSUM
|
Should be enabled on every production DB.
|
|
DBCC CHECKDB
|
You can force SQL Server to read every page from the disk and check the integrity by executing the DBCC CHECKDB cmd
|
|
DBCC CHECKDB syntax
|
DBCC CHECKDB
[( 'database_name' / database_ID / 0 [ , NOINDEX / { REPAIR_ALLOW_DATA_LOSS / REPAIR_FAST / REPAIR_REBUILD } ] )] [ WITH { [ ALL_ERRORMSGS ] [ , [ NO_INFOMSGS ] ] [ , [ TABLOCK ] ] [ , [ ESTIMATEONLY ] ] [ , [ PHYSICAL_ONLY ] ] } ] |
|
What SQL Server does in response to DBCC CHECKDB
(6) |
1) Checks page allocation within the db.
2) Checks the structural integrity of all tables and indexed views. 3) Calculates a checksum for every data and index page to compare against the stored checksum. 4) Validates the contents of every indexed view. 5) Checks the DB catalog. 6) Validates Service Broker data within the DB. |
|
What cmds does DBCC CHECKDB execute to achieve its tasks
(3) |
1) DBCC CHECKALLOC, to check the page allocation of the DB.
2) DBCC CHECKCATALOG, to check the DB catalog. 3) DBCC CHECKTABLE, for each table and view in the DB to check the structural integrity. |
|
schemas explanation
|
In addition to being a security structure, a schema provides a convenient mechanism to group objects together within a DB.
|
|
syntax to create schema
|
CREATE SCHEMA <schema name> AUTHORIZATION <owner name>
|
|
Numeric data type - 9
|
TINYINT
SMALLINT INT BIGINT DECIMAL(P,S) NUMERIC(P,S) SMALLMONEY MONEY REAL FLOAT |
|
Decimal data type precision and scale explanation
|
The decimal data type has two parameters - precision and scale. Precision indicates the total number of digits that can be stored both to the left and to the right of the decimal point. Scale indicates the maximum number of digits to the right of the decimal point.
|
|
Decimal and Numeric Data Type Storage cost list - 4
|
1 to 9 - 5 bytes
10 to 19 - 9 bytes 20 to 28 - 13 bytes 29 to 38 - 17 bytes |
|
Character data type
|
CHAR(n) - 1 byte per char max 8000 non-unicode
VARCHAR(n) - 1 byte per char max 8000 non-unicode NCHAR(n) - 2 bytes per char max 4000 unicode NVARCHAR(n) - 2 bytes per char max 4000 unicode |
|
Date and Time data types
|
SMALLDATETIME 01/01/1900 to 06/06/2079 accuracy 1 min 4 bytes
DATETIME 01/01/1753 to 12/31/9999 accuracy 0.0033 sec 8 bytes DATETIME2 01/01/0001 to 12/31/9999 100 nanos 6-8 bytes DATETIMEOFFSET 01/01/0001 to 12/31/9999 100 nanos 8-10 bytes DATE 01/01/0001 to 12/31/9999 1 day 3 bytes TIME 00:00:00 to 23:59:59 100 nanos 3-5 bytes |
|
Binary Data
|
BIT Null,0,1 1Bit
BINARY Fixed-length binary data Up to 8000 bytes VARBINARY Variable-length binary data Up to 8000 bytes |
|
XML Data Type
|
Limited to maximum of 2 GB
Has a maximum of 128 levels within a document |
|
Schema collection
|
XML schemas are stored within SQL Server in a structure called a schema collection. When a schema is applied to an XML column, only documents that validate against the associated XML schema collection are allowed to be stored within the column
|
|
spatial data types
|
SQL Server 2008 supports two data types to store spatial data:
GEOMETRY and GEOGRAPHY |
|
How are the two spatial data types implemented and when was that technology introduced
|
Both spatial data types are implemented using the CLR, which was introduced in SQL Server 2005
|
|
CLR
|
Common language runtime
|
|
GEOMETRY column can contain
(7) |
1) Point
2) LineString 3) Polygon 4) GeometryCollection 5) MultiPolygon 6) MultiLineString 7) MultiPoint |
|
HIERARCHYID data type
|
Used to organize hierarchical data, such as organization charts, bills of materials, and flow charts. Stores position within a tree hierarchy.
|
|
Column properties list
|
1) NULLABILITY
2) COLLATE 3) IDENTITY 4) ROWGUIDCOL 5) FILESTREAM 6) NOT FOR REPLICATION 7) SPARSE |
|
Nullability
|
You can specify whether a column allows null by specifying NULL or NOT NULL for the column properties. If you don't specify a nullability option, SQL Server uses the default option when creating a table.
|
|
Command to set the default nullability option
|
ANSI_NULL_DEFAULT - the default option is not guaranteed to be the same for each DB; you can modify it with this command.
|
|
COLLATE
|
You can set the COLLATE property of a DB to override the instance collation sequence.
|
|
What option would you specify on a column to control the collation behavior
|
COLLATE - works only on character-based columns
|
|
IDENTITY
|
Identities are used to provide a value for a column automatically when data is inserted.
1) A column with the identity property cannot be updated. 2) Float and real cannot be used as identity types. 3) Only one identity column per table |
|
IDENTITY restrictions
|
1) A column with the identity property cannot be updated.
2) Float and real cannot be used as identity types. 3) Only one identity column per table |
|
How can you insert an explicit value in an identity column?
|
SET IDENTITY_INSERT <table name> ON
|
|
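A minimal sketch of the full round trip; the table and column names are illustrative, and an explicit column list is required while the option is ON:

```sql
-- Allow an explicit value into the identity column, then re-lock it.
SET IDENTITY_INSERT dbo.Customers ON;

INSERT INTO dbo.Customers (CustomerID, Name)  -- column list required
VALUES (42, 'Explicit row');

SET IDENTITY_INSERT dbo.Customers OFF;
```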
How can you change the next value generated in IDENTITY column
|
By using the DBCC CHECKIDENT command
|
|
DBCC CHECKIDENT
|
Changes the next value generated for an IDENTITY column: DBCC CHECKIDENT ('<table name>', RESEED, <new value>)
|
|
ROWGUIDCOL
|
Used mainly by merge replication to designate a column that is used to identify rows uniquely across a DB. Ensures that only a single column of this type exists and that the column has the UNIQUEIDENTIFIER data type.
|
|
FILESTREAM
|
BLOBs stored in FILESTREAM are controlled and maintained by SQL Server, but the file resides on the OS.
|
|
BLOBs
|
Binary large object
|
|
Benefits of using FILESTREAM and BLOBs
|
1) By storing the data on the file system outside of the DB, you are not limited to 2 GB.
2) When you back up the DB, all files are backed up at the same time |
|
To what kind of columns can you apply the FILESTREAM
|
Columns with a VARBINARY(MAX) data type. The column within the table maintains a 16-byte identifier for the file.
|
|
Limitations of a FILEGROUP designated for FILESTREAM
|
1) Storage is off-line
2) Inaccessible within a DB snapshot 3) DB Mirroring cannot be implemented against a DB containing data stored with FILESTREAM |
|
NOT FOR REPLICATION
|
Used for columns defined with the IDENTITY property. When you define an identity, you specify:
1) Seed 2) Increment If you explicitly insert a value in the column, SQL reseeds the column. If the table is participating in replication, you don't want to reseed the identity column each time data is synchronized. By applying NOT FOR REPLICATION, SQL doesn't reseed the identity column when the replication engine is applying changes. |
|
SPARSE requirement
|
To designate a column as SPARSE the column must allow NULLs.
|
|
SPARSE
|
When a NULL is stored in a column designated as SPARSE no storage space is consumed. However, non-NULL values require 4 bytes extra for storage
|
|
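A minimal sketch of a SPARSE column; the table and column names are illustrative:

```sql
-- A mostly-NULL column declared SPARSE consumes no space for NULLs;
-- the column must be nullable.
CREATE TABLE dbo.Products
(
    ProductID        INT IDENTITY(1,1) NOT NULL,
    Name             VARCHAR(100)      NOT NULL,
    DiscontinuedDate DATETIME SPARSE   NULL
);
```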
You cannot apply SPARSE to
|
1) Columns with ROWGUIDCOL
2) Columns with IDENTITY 3) TEXT, NTEXT, IMAGE, TIMESTAMP, GEOMETRY, GEOGRAPHY 4) User defined data types 5) VARBINARY(MAX) with FILESTREAM 6) Computed column 7) Column within an ALTER TABLE statement |
|
Limitation for using SPARSE
|
If the max size of a row is over 4009 bytes, ALTER TABLE cannot be issued. During ALTER TABLE, each row is recomputed by writing a second copy of the row on the same data page. When the row is copied, if it exceeds 4009 bytes this would cause the data page to exceed 8018 bytes
|
|
Computed Columns
|
Allow you to add to a table columns that, instead of being populated with data, are calculated based on other columns in the row.
|
|
PERSISTED property for Computed Columns
|
When you create a computed column, only the definition of the calculation is stored. When you use the computed column within any DML statement, the value is calculated at runtime. Using PERSISTED causes SQL to store the result of the calculation in the row and update the value anytime data participating in the calculation is changed
|
|
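A minimal sketch of a PERSISTED computed column; the table and column names are illustrative:

```sql
-- TotalPrice is computed from other columns; PERSISTED stores the
-- result in the row and keeps it updated whenever Quantity or
-- UnitPrice changes.
CREATE TABLE dbo.OrderLines
(
    OrderLineID INT   NOT NULL,
    Quantity    INT   NOT NULL,
    UnitPrice   MONEY NOT NULL,
    TotalPrice  AS (Quantity * UnitPrice) PERSISTED
);
```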
Row and Page Compression
|
SQL Server 2008 allows you to compress rows and pages for tables that do not have SPARSE columns, as well as for indexes and indexed views
|
|
Heap
|
Table without a clustered index
|
|
What happens when apply compression to a heap?
|
Compresses only the pages that currently exist in the table
|
|
When data added to the heap is compressed?
|
Only if BULK INSERT or INSERT INTO...WITH(TABLOCK) statements are used
|
|
What you need to execute to compress newly added pages?
|
ALTER TABLE...REBUILD statement with the PAGE compression option
|
|
Does compression setting for a table pass to any nonclustered indexes or indexed views created against a table
|
It does not; you need to specify it for each nonclustered index or indexed view
|
|
What data types store data outside of the table?
|
1) VARCHAR(MAX)
2) NVARCHAR(MAX) 3) VARBINARY(MAX) |
|
Is data stored outside of the row in a table compressed?
|
No
|
|
Columns in standard SQL Server table
|
1024
|
|
Using column set definition in conjunction with the new sparse column capabilities how many columns can you have in a DB
|
Up to 30000
|
|
Wide tables
|
Tables that exceed 1024 columns by using a column set definition; the data stored in any row still cannot exceed 8019 bytes unless it uses:
1) VARCHAR(MAX) 2) NVARCHAR(MAX) 3) VARBINARY(MAX) |
|
Temporary tables are stored in ______ and can be either _____ or ______
|
Temporary tables are stored in TEMPDB and can be either LOCAL or GLOBAL.
|
|
Local temporary table
|
Designated by name starting with # and visible only within the connection that created it.
|
|
Global temporary table
|
Name begins with ## and is visible across all connections.
|
|
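The naming convention for both scopes can be sketched as follows; the table names are illustrative:

```sql
-- Local temp table: visible only to the connection that created it.
CREATE TABLE #Staging (ID INT, Payload VARCHAR(50));

-- Global temp table: visible to all connections until the last
-- connection referencing it closes.
CREATE TABLE ##SharedStaging (ID INT, Payload VARCHAR(50));
```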
Table variables
|
Created to pass sets of data within stored procedures and functions. Stored in memory, a table variable is visible only within the connection that declared it.
|
|
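A minimal sketch of declaring and using a table variable; the variable and column names are illustrative:

```sql
-- Declare, populate, and query a table variable.
DECLARE @Recent TABLE (OrderID INT, OrderDate DATETIME);

INSERT INTO @Recent (OrderID, OrderDate)
VALUES (1, GETDATE());

SELECT OrderID, OrderDate FROM @Recent;
```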
With what cmds can table variables be populated or used
|
1) Insert
2) Update 3) Delete 4) Merge 5) Participate in JOIN |
|
primary keys
|
1 primary key constraint per table. Defines the column(s) that uniquely identify every row in the table. All columns within the primary key must be specified as NOT NULL.
|
|
Clustered primary key
|
The default SQL Server behavior, causes SQL Server to store the table in sorted order according to the primary key.
|
|
Default option for primary key is
|
Clustered. When a clustered primary key is created on a table that is compressed, the compression option is applied to the primary key when the table is rebuilt.
|
|
Foreign key
|
Implemented to provide referential integrity between tables within a DB. If you have a multicolumn primary key, all the columns from the parent primary key must exist in the child table to define a foreign key.
|
|
CASCADE option for a foreign key
|
You can configure a foreign key such that modifications of the parent are cascaded to the child table. Cascading is a bad idea.
|
|
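CASCADE option cmd
(sketch) |
A hedged sketch, not from the source; table and constraint names are illustrative:
ALTER TABLE test.Orders
ADD CONSTRAINT fk_order_customer FOREIGN KEY (CustomerID)
REFERENCES test.Customer (CustomerID)
ON DELETE CASCADE
GO |
|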
Unique constraints
|
Define a column or columns for which the values must be unique within the table. Duplicate entries are not allowed.
|
|
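Add unique constraint cmd
(sketch) |
A sketch modeled on the deck's other constraint cmds, not from the source; names are illustrative:
ALTER TABLE <tb name>
ADD CONSTRAINT uq_customer_email UNIQUE (EmailAddress)
GO |
|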
Dif. between unique constraints and primary keys
|
Unique constraint allows NULL.
Primary key doesn't allow NULL. |
|
Default constraints
|
Specify a value that is written to the column if the application does not supply a value. Can be defined for NULL or NOT NULL columns.
|
|
Default constraints apply only for rows added with
|
INSERT
BCP
BULK INSERT |
|
Check constraints
|
Limit the range of values within a column. Can be column level and table level.
|
|
Column-level check constraints
|
Are not allowed to reference any other column in the table.
|
|
Table-level check constraints
|
Can reference any column within a table, but they are not allowed to reference columns in other tables.
|
|
Add primary key cmd
|
ALTER TABLE <tb name>
ADD CONSTRAINT pk_customer PRIMARY KEY CLUSTERED (CustomerID)
GO |
|
Add foreign key cmd
|
ALTER TABLE <tb name>
ADD CONSTRAINT fk_order FOREIGN KEY (CustomerID)
REFERENCES test.Customer (CustomerID)
GO |
|
Default constraint cmd
|
ALTER TABLE <tb name>
ADD CONSTRAINT df_customer DEFAULT (GETDATE()) FOR CreationDate
GO |
|
Check constraint
|
ALTER TABLE <tb name>
ADD CONSTRAINT ck_customer CHECK (Subtotal > 0)
GO |
|
What structure SQL uses to build and maintain indexes?
|
Balanced tree aka b-tree
B-tree is always symmetrical |
|
How big is a data page?
|
8192 bytes, but only 8060 bytes of actual user data
|
|
What are the three types of pages within an index?
|
An index can contain:
Root pages, Intermediate pages, and Leaf pages |
|
How do you define an index?
|
By using one or more columns in the table, called the index key.
|
|
What is the maximum number of columns that can participate in an index key?
|
16 columns
|
|
What is the maximum size of the index key?
|
900 bytes
|
|
clustering key
|
Columns defined for the clustered index.
|
|
What does a clustering key force SQL Server to do?
|
It causes SQL Server to arrange the data in the table according to the clustering key. Because you can arrange the table in only one way, you can have only one clustered index per table.
|
|
heap
|
A table without a clustered index. SQL Server allocates pages and stores data as data is written to the table.
|
|
page chain of index
|
Shows how the pages are sorted logically.
|
|
nonclustered index
|
Built on the data that is stored, with the leaf level of the index containing a pointer to the location of the row in the table's data page.
|
|
forwarding pointer
|
When SQL Server must move a row because of subsequent modifications (a page split, or a row no longer fitting on its page), nonclustered indexes don't get updated with the new location of the row; instead, a forwarding pointer to the new location is created.
|
|
FILESTREAM_ON clause
|
Used when clustered indexes are created on a table containing FILESTREAM data.
|
|
What happens if a different filegroup is specified in FILESTREAM_ON clause than where the FILESTREAM data is currently located
|
All FILESTREAM data will be moved to the newly specified filegroup during the creation of the clustered index.
|
|
How SQL Server physically implements a primary key
|
As a unique clustered index.
|
|
default option for a primary key
|
Clustered
|
|
How SQL Server physically implements a unique constraint
|
As a unique index.
|
|
nonclustered index
|
Do not impose a sort order on the table.
There can be up to 999 per table
No more than 900 bytes
No more than 16 columns |
|
covering index
|
When an index is constructed in such a way that it can completely satisfy queries by reading only the index.
|
|
INCLUDE clause when creating a new index
|
Indexes can be created using the INCLUDE clause. Included columns become part of the index only at the leaf level.
|
|
What determines if an index should be used to satisfy a query
|
The query optimizer based on the distribution statistics.
|
|
histogram
|
When an index is created, a structure called a histogram is generated that stores info about the relative distribution of data values within the column.
|
|
selectivity of an index
|
The degree to which values in the column allow to locate small sets of data.
|
|
query optimizer uses the most ____ index to satisfy a query because a highly selective index allows the query processor to skip large amounts of data
|
Selective
|
|
filtered index
|
An index with a WHERE clause. Only the index keys matching the WHERE clause are added to the index. This allows you to build indexes that focus on highly selective portions of the table while allowing SQL to use other methods for less selective ranges.
|
|
restrictions for filtered indexes(3)
|
1) Must be nonclustered
2) Cannot be created on computed columns 3) Columns cannot undergo implicit or explicit data type conversion |
|
FILLFACTOR option (for when creating an index)
|
To control the rate at which page splits occur we specify the fill factor for the index. FILLFACTOR specifies the percentage of free space that should be left on the leaf level of an index during creation or rebuild.
|
|
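FILLFACTOR example
(sketch) |
A minimal sketch, not from the source; index and table names are illustrative:
CREATE NONCLUSTERED INDEX idx_lastname ON dbo.Customer(LastName)
WITH (FILLFACTOR = 80, PAD_INDEX = ON)
GO |
|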
FILLFACTOR applies only to the ____ level of the index
|
Leaf
|
|
PAD_INDEX option (for when creating an index)
|
Used when there is a high number of leaf-level pages. Will cause the FILLFACTOR to be applied to the intermediate-level pages and the root page of an index.
|
|
SORT_IN_TEMPDB option (for when creating an index)
|
By default the worktables are created in the same DB as the index. They can be forced to be created in the TEMPDB.
|
|
IGNORE_DUP_KEY option
|
Lets you have a unique index and do a multi-row insert. Normally, if any of the rows in the insert duplicates an existing index key, all rows are dropped. With IGNORE_DUP_KEY, only the duplicate rows are dropped from the insert; all others are accepted.
|
|
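IGNORE_DUP_KEY example
(sketch) |
A minimal sketch, not from the source; names are illustrative:
CREATE UNIQUE NONCLUSTERED INDEX idx_email ON dbo.Customer(EmailAddress)
WITH (IGNORE_DUP_KEY = ON)
GO |
|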
WITH ONLINE = OFF option
|
With this option SQL Server will lock the entire table while building the index.
|
|
WITH ONLINE = ON option
|
SQL Server allows changes to the table during the creation of the index by using the version store within tempdb.
|
|
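WITH ONLINE example
(sketch) |
A minimal sketch, not from the source; names are illustrative:
CREATE NONCLUSTERED INDEX idx_city ON dbo.Customer(City)
WITH (ONLINE = ON)
GO |
|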
Online index creation/rebuild,
online restore are available only in |
SQL Server 2008 Enterprise
|
|
XML index
|
To improve performance of queries against XML data.
|
|
There are two types of XML indexes
|
Primary
Secondary |
|
Primary XML index
|
Built against all the nodes within the XML column. Tied to the table by maintaining a link to the corresponding row in the clustered index. That is why a clustered index is required before you can create a primary XML index.
|
|
Secondary XML index
|
After you create a primary XML index, you can have a secondary. This secondary index is built against one of:
PATH
VALUE
PROPERTY |
|
Spatial index
|
Created against a spatial column, whose data type is either geometry or geography.
|
|
Tessellation
|
Spatial data is defined using a 2-D coordinate system. Indexes are built as B-trees, a linear structure, so to index spatial data SQL transforms the 2-D data into a linear chain. This transformation is called tessellation.
|
|
Types of secondary XML indexes
|
PATH
VALUE
PROPERTY |
|
Index fragmentation
|
When an index is built, all the values from the index key are written in sorted order onto the pages. Removal of values creates "holes" on the index page. If data in a column that the index is based on changes, the row may have to be moved, which leaves another "hole". When an index page fills up and requires a page split, additional "holes" might be created.
|
|
ALTER INDEX cmd use
|
Statement to remove the fragmentation.
|
|
REBUILD option for ALTER INDEX
|
Rebuilds all levels of the index and leaves all pages filled according to the FILLFACTOR of the index.
|
|
REORGANIZE option for ALTER INDEX
|
Removes fragmentation only at the leaf level. Intermediate-level pages and the root page are not defragmented during a reorganize. Always an online operation.
|
|
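ALTER INDEX REBUILD / REORGANIZE examples
(sketch) |
Minimal sketches, not from the source; names are illustrative:
ALTER INDEX ALL ON dbo.Customer REBUILD
ALTER INDEX idx_city ON dbo.Customer REORGANIZE
GO |
|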
What type of lock is applied on a table during index rebuild?
|
The rebuild of an index effectively re-creates the entire B-tree structure, so unless the ONLINE option is specified, a shared table lock is acquired, preventing any changes until the rebuild operation completes.
|
|
Disabling index(cmd)
|
ALTER INDEX { index_name / ALL } ON <object>
DISABLE [ ; ] |
|
If a clustered index is disabled, what happens to the table?
|
The entire table becomes inaccessible.
|
|
Enable index(cmd)
|
ALTER INDEX { index_name / ALL } ON <object>
REBUILD [ ; ] |
|
What happens when an index is disabled?
|
An index that is disabled is no longer used by the optimizer. In addition, as data changes in the table, any disabled index is not maintained.
|
|
Spatial index
|
Constructed by mapping the spatial objects to a four-level uniform grid and applying tessellation rules
|
|
Fix fragmented indexes
|
REBUILD or REORGANIZE option of ALTER INDEX statement.
|
|
Full text catalogs
|
The first step in building a full text index is to create a storage structure. Unlike relational indexes, full text indexes have a unique structure that is maintained within a separate storage format called a full text catalog. Each full text catalog contains one or more full text indexes.
|
|
In SQL Server 2008, where is the content of a full text catalog stored?
|
Stored within the DB.
|
|
In pre-2008 SQL Server, where is the content of a full text catalog stored?
|
You associated the full text catalog with a filegroup only for the backup process, with the entire contents of the catalog maintained in a directory structure on the OS.
|
|
Generic syntax for creating a full text catalog.
|
CREATE FULLTEXT CATALOG catalog_name
[ON FILEGROUP filegroup ]
[IN PATH 'rootpath']
[WITH <catalog_option>]
[AS DEFAULT]
[AUTHORIZATION owner_name ]
<catalog_option>::= ACCENT_SENSITIVITY = {ON/OFF} |
|
CREATE FULLTEXT CATALOG catalog_name
[ON FILEGROUP filegroup ] [IN PATH 'rootpath'] [WITH <catalog_option>] [AS DEFAULT] [AUTHORIZATION owner_name ]
What is the FILEGROUP option for? |
specifies the filegroup that you want to use to store any full text indexes created with the full text catalog
|
|
CREATE FULLTEXT CATALOG catalog_name
[ON FILEGROUP filegroup ] [IN PATH 'rootpath'] [WITH <catalog_option>] [AS DEFAULT] [AUTHORIZATION owner_name ]
What is the IN PATH option for? |
It's a deprecated clause and should not be used anymore, because full text catalogs are now stored in the DB
|
|
Full text index
|
After you create the full text catalog, you can create the full text indexes that are the basis for searching unstructured data
|
|
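Create full text index cmd
(sketch) |
A hedged sketch, not from the source; assumes an existing full text catalog ftcat and unique key index pk_customer, both illustrative:
CREATE FULLTEXT INDEX ON dbo.Customer(Notes)
KEY INDEX pk_customer ON ftcat
WITH CHANGE_TRACKING AUTO
GO |
|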
On what columns can you create full text indexes
|
CHAR
VARCHAR
XML
VARBINARY |
|
full text index on a CHAR/VARCHAR column
|
The full text engine parses the data directly and builds an appropriate index.
|
|
full text index on a XML column
|
Loads a special processor that can understand and parse an XML document so that we are indexing the XML document but not the XML tags.
|
|
most common use of VARBINARY(MAX) columns
|
To store documents using the new FILESTREAM capabilities in SQL.
|
|
word breakers
|
Assemblies that locate breaks between words to build a list of words to be indexed.
|
|
stemmers
|
Stemmers allow queries to locate information even across multiple verb tenses.
|
|
stop word
|
A word so common it is excluded from indexing; for example, a, the, and an are stop words for English.
|
|
Full text index limitation
|
You can create full text indexes on multiple columns; however, you can create only a single full text index on a table or indexed view.
|
|
CHANGE_TRACKING option
|
This full text index option determines how SQL Server maintains the index when the underlying data changes.
|
|
What commands SQL provide to query full text data
|
CONTAINS
FREETEXT
CONTAINSTABLE
FREETEXTTABLE |
|
What CONTAINS, FREETEXT return
|
Return a True/False value used to restrict a result set.
|
|
what CONTAINSTABLE, FREETEXTTABLE return
|
Return a result set that can be used to extend query functionality.
|
|
FREETEXT example
|
These queries are the most basic form of full text search:
SELECT ProductDescriptionID, Description
FROM Production.ProductDescription
WHERE FREETEXT(Description, N'bike')
GO |
|
What happens to a query when we use non-Unicode string
|
All search terms used with full text are Unicode strings. If a non-Unicode string is passed, the query still works, but is much less efficient because the optimizer cannot use parameter sniffing to evaluate distribution statistics on the full text index. Make certain that all terms you pass in for full text search are always typed as Unicode for maximum performance.
|
|
FREETEXTTABLE example
|
Returns a result set with additional info that ranks the results according to how close the match was to the original search term.
SELECT a.ProductDescriptionID, a.Description, b.*
FROM Production.ProductDescription a
INNER JOIN FREETEXTTABLE(Production.ProductDescription, Description, N'bike') b
ON a.ProductDescriptionID = b.[Key]
ORDER BY b.[Rank]
GO |
|
CONTAINS explanation
|
For queries that require greater flexibility, you would use the CONTAINS predicate, which allows you to:
1) Search word forms
2) Search for word proximity
3) Provide relative weighting to terms |
|
CONTAINS example
|
SELECT ProductDescriptionID, Description
FROM Production.ProductDescription
WHERE CONTAINS(Description, N'bike') |
|
What options can you use if you want to search for word variants
|
FORMSOF - searches variants of a word, with:
INFLECTIONAL - considers word stems
THESAURUS - produces synonyms for the search term |
|
where are the thesaurus files
|
A thesaurus file exists for each supported language. All thesaurus files are XML files stored in the FTData directory underneath the SQL installation path.
|
|
NEAR keyword
|
Because full text indexes are built against unstructured data, the index stores the proximity of one word to another in addition to indexing the words found within the data. Proximity searching is accomplished by using the NEAR keyword.
|
|
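NEAR example
(sketch) |
A minimal sketch following the deck's CONTAINS examples; the search terms are illustrative:
SELECT ProductDescriptionID, Description
FROM Production.ProductDescription
WHERE CONTAINS(Description, N'mountain NEAR bike') |
|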
Which cmd perform fuzzy searching by default?
|
FREETEXT and FREETEXTTABLE predicates perform fuzzy searches by default.
|
|
Which cmd perform proximity and synonym searches?
|
CONTAINS and CONTAINSTABLE are used for proximity, thesaurus and inflectional searches.
|
|
For what cmd SQL Server uses the language-specific thesaurus automatically
|
FREETEXT and FREETEXTTABLE queries.
|
|
What cmd key you need to specify to turn on thesaurus for CONTAINS and CONTAINSTABLE queries
|
FORMSOF THESAURUS
|
|
stop list aka noise word files
|
Used to exclude words that you do not want to include in a full text index.
|
|
In SQL Server 2005 and previous, where are the noise word files stored?
|
FTData directory
|
|
In SQL Server 2008, where are the noise word files stored?
|
Within a DB on the server.
|
|
Option for populating full text indexes
|
1) FULL
2) INCREMENTAL
3) UPDATE |
|
FULL option for populating full text index exp.
|
Reprocesses every row from the underlying data to rebuild the full text index completely.
|
|
INCREMENTAL option for populating full text index exp.
|
Processes only the rows that have changed since the last population; requires a timestamp column on the table.
|
|
UPDATE option for populating full text index exp.
|
Processes any changes since the last time the index was updated; requires that the CHANGE_TRACKING option is enabled for the index and set to MANUAL.
|
|
How do you initiate a full text index population?
|
ALTER FULLTEXT INDEX statement
|
|
How you manage thesaurus files
|
By editing the language-specific file that is contained within the FTData directory for the instance.
|
|
What is the cmd for creating a stop list
|
CREATE FULLTEXT STOPLIST
|
|
What is the cmd for changing a stop list
|
ALTER FULLTEXT STOPLIST
|
|
What cmd is used to associate a stop list with a full text index
|
ALTER FULLTEXT INDEX
|
|
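Stop list cmds
(sketch) |
A hedged sketch, not from the source; stop list, word, and table names are illustrative:
CREATE FULLTEXT STOPLIST mystoplist;
ALTER FULLTEXT STOPLIST mystoplist ADD 'sample' LANGUAGE 1033;
ALTER FULLTEXT INDEX ON dbo.Customer SET STOPLIST mystoplist |
|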
In what version was table partitioning introduced
|
2005
|
|
What is table partitioning used for?
|
A means to split large tables across multiple structures.
Allows tables, indexes, and indexed views to be created on multiple filegroups while also allowing the DBA to specify which portion of the object will be stored on a specific filegroup |
|
Partition function
|
Defines the set of boundary points for which data will be partitioned.
|
|
Example for partition function
|
CREATE PARTITION FUNCTION mypartfunction (int)
AS RANGE LEFT
FOR VALUES (10,20,30,40,50,60) |
|
What data type cannot be used for partition function
|
1) text
2) ntext
3) image
4) varbinary(max)
5) timestamp
6) xml
7) varchar(max)
Also cannot use Transact-SQL user-defined data types or CLR data types |
|
What is the AS clause used for in CREATE PARTITION FUNCTION
|
Allows you to specify whether the partition function you are creating is RANGE LEFT or RANGE RIGHT. The LEFT and RIGHT parameters define which partition will include a boundary point.
|
|
What is the FOR VALUES clause used for in CREATE PARTITION FUNCTION
|
Used to specify the boundary points for the partition function.
|
|
Maximum amount of partitions for an object? And maximum amount of boundary points?
|
Partitions: 1,000
Boundary points: 999 |
|
CREATE PARTITION FUNCTION
mypartfunction (int) AS RANGE LEFT FOR VALUES (10,20,30,40,50,60) representation |
Partition -- min val -- max val
1 -- neg inf -- 10
2 -- 11 -- 20
3 -- 21 -- 30
4 -- 31 -- 40
5 -- 41 -- 50
6 -- 51 -- 60
7 -- 61 -- pos inf |
|
CREATE PARTITION FUNCTION
mypartfunction (int) AS RANGE RIGHT FOR VALUES (10,20,30,40,50,60) representation |
Partition -- min val -- max val
1 -- neg inf -- 9
2 -- 10 -- 19
3 -- 20 -- 29
4 -- 30 -- 39
5 -- 40 -- 49
6 -- 50 -- 59
7 -- 60 -- pos inf |
|
Can you partition an existing object after it has been populated with data?
|
Yes
|
|
What you do to partition an existing table
|
Drop the clustered index and re-create the clustered index on the partition scheme
|
|
Partition an existing index or indexed view
|
Drop the index and re-create the index on a partition scheme
|
|
partition scheme
|
Defines the storage structures and collection of filegroups that you want to use with a given partition function
|
|
Can a partition scheme encompass more than one filegroup?
|
Yes
|
|
create partition scheme cmd
|
CREATE PARTITION SCHEME partition_scheme_name
AS PARTITION partition_function_name
[ ALL ] TO ( { file_group_name / [ PRIMARY ] } [ ,...n ] )
[ ; ] |
|
CREATE PARTITION SCHEME requirements
|
1) Any filegroup specified in CREATE PARTITION SCHEME must already exist in the DB
|
|
What if you specify the ALL keyword in CREATE PARTITION SCHEME
|
You can specify a maximum of one filegroup
|
|
Requirements for partition schemes and filegroup
|
A partition scheme must be defined in such a way as to contain a filegroup for each partition that is created by the partition function mapped to the partition scheme
|
|
example partitioned table
|
CREATE TABLE Employee (
EmpID int NOT NULL,
FirstN varchar(50) NOT NULL,
LastN varchar(50) NOT NULL)
ON mypartscheme(EmpID) |
|
What is the final step in creating a partitioned table or index?
|
To create the table or index on a partition scheme, specifying a partitioning column
|
|
req. partition key
|
The partitioning key that is specified must match the data type, length, and precision of the partition function.
|
|
What if a partitioning key is a computed column
|
The computed column must be PERSISTED
|
|
creating a partitioned index
(example) |
CREATE NONCLUSTERED INDEX idx_employeefirstname
ON dbo.Employee(FirstName)
ON mypartscheme(EmployeeID)
GO |
|
creating a partitioned index
(expl) |
Similar to creating a partitioned table, you partition an index by specifying a partition scheme in the ON clause.
|
|
What if an index has the optional INCLUDE clause?
|
When you create an index on a partitioned table, SQL Server automatically includes the partitioning key in the definition of each index, thereby allowing you to partition an index the same way as the table is partitioned.
|
|
Which clause of the CREATE TABLE or CREATE INDEX statements is used to partition the object?
|
The ON clause is used to specify the storage structure, filegroup or partition scheme, for the table or index.
|
|
SPLIT operator
|
Introduces a new boundary point into a partition function.
|
|
MERGE operator
|
Eliminates a boundary point from a partition function.
|
|
SPLIT / MERGE syntax
|
ALTER PARTITION FUNCTION partition_function_name()
{ SPLIT RANGE ( boundary_value )
/ MERGE RANGE ( boundary_value ) } [ ; ] |
|
Altering a partition scheme
|
Add filegroups to an existing partition scheme to create more storage space for a partitioned table.
|
|
ALTER PARTITION SCHEME syntax
|
ALTER PARTITION SCHEME partition_scheme_name
NEXT USED [ filegroup_name ] [ ; ] |
|
What aligned means when talking about tables, indexes and partition functions?
|
If a table and all its indexes are partitioned using the same partition function they are said to be aligned.
|
|
SWITCH
|
Allows you to exchange partitions between tables in a perfectly scalable manner with no locking, blocking or deadlocking.
|
|
SWITCH requirements
|
1) The data and index for the source and target tables must be aligned.
2) Source and target tables must have the same structure.
3) Data cannot be moved from one filegroup to another.
4) Two partitions with data cannot be exchanged.
5) The target partition must be empty.
6) The source or target table cannot participate in replication.
7) The source or target tables cannot have full text indexes or a FILESTREAM data type defined. |
|
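SWITCH example
(sketch) |
A hedged sketch, not from the source; assumes aligned, identically structured tables and an empty target partition; names are illustrative:
ALTER TABLE dbo.OrdersStaging
SWITCH TO dbo.Orders PARTITION 2
GO |
|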
Can you move data from one filegroup to another or exchange two partitions with data using the SWITCH operations
|
No
|
|
BCP queryout argument
|
If you want to export only a subset of a table or the result set of a query, you can replace the name of the table with a query delimited by double quotes and then specify the queryout parameter. Allows only exporting data.
|
|
BCP's -n switch
|
Specifies that the data in the file is in the native format of SQL Server.
|
|
BCP's -c switch
|
Specifies that the data is in character format.
|
|
BCP's -S switch
|
Specifies the instance name to connect to.
|
|
BCP's -T switch
-U and -P |
Designates a trusted connection.
Specify Username and Password. |
|
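BCP export example
(sketch) |
A hedged command-line sketch, not from the source; server, DB, and file names are illustrative:
bcp AdventureWorks.dbo.Customer out customer.dat -n -S MYSERVER -T |
|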
What is the draw back of BCP?
|
It's a command-line utility
|
|
dif. between BCP and BULK INSERT
|
1) BULK INSERT cannot export
2) It's a T-SQL cmd and does not need to specify the instance name or login credentials. |
|
facets
|
Core objects upon which your standards are built. Define the type of object or option to be checked.
74 by default, implemented as .NET assemblies |
|
how to get a list of available facets
|
dbo.syspolicy_management_facets
|
|
DMF
SMO |
Declarative Management Framework
Server Management Objects |
|
conditions
|
Within the Policy Based Management framework, conditions are the equivalent of a WHERE clause that define the criteria needing to be checked.
|
|
policy target
|
You can target a policy at the server level or at the database level (system DBs and user DBs).
|
|
policies
|
Policies are created for a single condition and set to either enforce or check compliance.
|
|
Policies execution modes
|
1) On demand - evaluates the policy when directly executed by a user
2) On change, prevent - creates DDL triggers to prevent a change that violates the policy
3) On change, log only - checks the policy automatically when a change is made using the event notification infrastructure
4) On schedule - creates a SQL Server Agent job to check the policy on a defined schedule |
|
policy categories
|
Can be used to group one or more policies into a single compliance unit
|
|
subscriptions
|
To check and enforce policies you create a subscription to one or more policies
|
|
2 types of subscriptions
|
Instance and database.
Member of sysadmin role can subscribe an instance to a policy category |
|
What are the five objects that are used within Policy Based Management?
|
The objects that are used with Policy Based Management are:
1) Facets
2) Conditions
3) Policies
4) Policy targets
5) Policy categories |
|
What are the allowed execution modes for a policy?
|
On demand
On schedule
On change, log only
On change, prevent |
|
Which object has a property that allows you to mandate checking for all DB on an instance
|
Policy categories allow you to mandate checking of all DB within an instance
|
|
How many facets can be checked within a single condition
|
1
|
|
How many conditions can be checked within a single policy
|
1
|
|
LSN
DCM DMK |
Log sequence number
Differential change map
DB master key |
|
differential change map
|
SQL Server tracks each extent that has been changed following a full backup using a special page in the header of a DB called the DCM.
|
|
What are the four type of backup in SQL?
|
1) Full
2) Differential
3) Transaction log
4) File/filegroup |
|
How can you detect and log corrupt page?
|
ALTER DATABASE <dbname> SET PAGE_VERIFY CHECKSUM
|
|
Under what account are backups executed
|
SQL Server Service Account
|
|
Member of sysadmin server role can backup ....?
|
Backup any DB in an instance
|
|
Member of db_owner database role can backup ...?
|
Can back up their DB
|
|
Member of db_backupoperator server role can backup ...?
|
Allow the user to back up a DB while preventing any other access to the DB
|
|
full backup
|
Captures all pages within a DB that contain data. Empty pages are not included
|
|
Steps performed by SQL to ensure concurrent access and backup consistency
|
1) Locks the DB, blocking all transactions
2) Places a mark in the transaction log
3) Releases the DB lock
4) Extracts all pages in the data files and writes them to the backup device
5) Locks the DB, blocking all transactions
6) Places a mark in the transaction log
7) Releases the DB lock
8) Extracts the portion of the log between the marks and appends it to the backup |
|
generic syntax to back up a DB
|
BACKUP DATABASE { database_name / @database_name_var }
TO <backup_device> [ ,...n ]
[ <MIRROR TO clause> ] [ next-mirror-to ]
[ WITH { DIFFERENTIAL / <general_WITH_options> [ ,...n ] } ]
[;] |
|
When SQL Server stripes a backup across multiple devices, what is the requirement for the devices?
|
All devices are required in order to restore
|
|
Example striped backup
|
BACKUP DATABASE AdvWorks
TO DISK = 'AdvWorks_1.bak',
DISK = 'AdvWorks_2.bak'
GO |
|
How many copies can you create using the MIRROR TO clause
|
up to 4
|
|
MIRROR TO requirements
|
1) All backup devices must be of the same media type.
2) Each mirror must have the same number of backup devices.
3) WITH FORMAT must be specified in the backup cmd. |
|
INIT/NOINIT options
|
Options of the BACKUP cmd that control whether an existing backup file is overwritten or appended to. NOINIT appends.
|
|
LSN
|
Log Sequence Number.
Integer value that starts at 1 and increments to infinity. It's never reused. |
|
DCM
|
Differential change map
A special page used to track differential backups. A full backup zeros the page. As changes are made, corresponding bits are set to 1. Using the info from the page, SQL knows what needs to be backed up |
|
COPY_ONLY option
|
Allows you to create a backup that does not affect the database state or the set of backups in production. A full backup with COPY_ONLY doesn't reset the DCM. A transaction log backup with COPY_ONLY does not remove transactions from the transaction log
|
|
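COPY_ONLY example
(sketch) |
A minimal sketch, not from the source; DB and file names are illustrative:
BACKUP DATABASE AdvWorks
TO DISK = 'AdvWorks_copy.bak'
WITH COPY_ONLY
GO |
|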
Filegroup backups
|
If you need to reduce the footprint of a backup, you can rely on file or filegroup backup instead.
|
|
Partial backups
|
Filegroups can be marked as read-only. Because the purpose of a backup is to capture changes, there is no need to back up a read-only filegroup. Performed by specifying the READ_WRITE_FILEGROUPS option
|
|
Page corruption
|
By executing
ALTER DATABASE <dbname> SET PAGE_VERIFY CHECKSUM
SQL can detect and quarantine corrupted pages |
|
service master keys
|
Each instance has a service master key. Created automatically the first time that an instance is started. Regenerated each time the SQL service account or service account password is changed.
|
|
DMK
|
Database Master Key
The root of the encryption hierarchy in a DB. To ensure access to certificates, asymmetric keys, and symmetric keys within the DB, you need to back up the DMK |
|
what are the 4 types of backup?
|
Full
Differential
Transaction log
File/filegroup backup |
|
DB snapshots
|
Introduced in SQL Server 2005, providing a method to create read-only copies of the data at a particular point in time.
|
|
DB snapshots and FILESTREAM feature
|
A DB snapshot is not compatible with FILESTREAM. If you create a DB snapshot against a DB with FILESTREAM data, the FILESTREAM filegroup is disabled and not accessible
|
|
How you create a DB snapshot
|
CREATE DATABASE command with the AS SNAPSHOT OF clause
|
|
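Create DB snapshot cmd
(sketch) |
A hedged sketch, not from the source; assumes a single-data-file DB named AdvWorks; logical and snapshot file names are illustrative:
CREATE DATABASE AdvWorks_Snap
ON (NAME = AdvWorks_Data, FILENAME = 'C:\AdvWorks_Snap.ss')
AS SNAPSHOT OF AdvWorks
GO |
|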
MSMQ
WMI |
MSMQ - Microsoft Message Queue
WMI - Windows Management Instrumentation |
|
Microsoft Office Integration(SSRS support)
|
Standard: Yes
Workgroup: Yes
Express: Advanced Services only
Compact: N/A |
|
Report Builder(SSRS support)
|
Standard: Yes
Workgroup: Yes
Express: Advanced Services only
Compact: N/A |
|
Scale-out Reporting(SSRS support)
|
Standard: No
Workgroup: No
Express: Advanced Services only
Compact: N/A |
|
Data-driven subscription(SSRS support)
|
Standard: No
Workgroup: No
Express: No
Compact: N/A |
|
Linked measures/dimensions(SSAS OLAP support)
|
Standard: No
Workgroup: No
Express: N/A
Compact: N/A |
|
Perspectives(SSAS OLAP support)
|
Standard: No
Workgroup: No
Express: N/A
Compact: N/A |
|
Partitioned Cubes(SSAS OLAP support)
|
Standard: No
Workgroup: No
Express: N/A
Compact: N/A |
|
Times series(SSAS Data mining support)
|
Standard: No
Workgroup: No
Express: N/A
Compact: N/A |
|
Parallel processing and reduction(SSAS Data mining support)
|
Standard: No
Workgroup: No
Express: N/A
Compact: N/A |
|
Advanced mining algorithms(SSAS Data mining support)
|
Standard: No
Workgroup: No
Express: N/A
Compact: N/A |
|
How do the core SQL Server components run?
|
The core SQL Server components run as services.
|
|
collation sequence
|
Controls how SQL Server treats character data for storage, retrieval, sorting, and comparison operations. SQL allows you to specify a collation sequence to support any language currently used around the world.
|
|
at what levels can collation sequence be specified
|
Instance, db, table, column.
|
|
Max amount of SQL Server instances on a single server
|
Total of 50 - 1 default and 49 named.
|
|
SQL Server Instances
|
Defines the container for all operations you perform within SQL Server. Each instance contains its own set of DBs, security credentials, config settings, Windows services, and other SQL Server objects.
|
|
Why we should not use Windows Service Control applet to change service accounts and passwords
|
The Windows Service Control applet also has entries for SQL Server services and allows you to change service accounts and passwords. You should never change service accounts or service account passwords using the Windows Service Control applet. SQL Server Configuration Manager needs to be used, because it includes the code to regenerate the service master key that is critical to the operation of SQL Server services.
|
|
SQL Server Surface Area Configuration
|
A utility that existed in SQL Server 2005 and was removed in SQL Server 2008 in favor of making configuration changes directly using the sp_configure system stored procedure
|
|
What are the two basic components of Database Mail?
|
Database Mail uses mail profiles which can contain one or more mail accounts
|
|
What are the two types of mail profiles that can be created?
|
Mail profiles can be configured as either public or private
|
|
cmd that enables the Database Mail feature?
|
EXEC sp_configure 'Database Mail XPs', 1
GO
RECONFIGURE WITH OVERRIDE
GO |
|
Activity Monitor
|
Feature of SSMS that displays query activity on the server
|
|
Notification Services
|
A feature that was available with SQL Server 2005 that is no longer available in SQL Server 2008
|
|
ON clause
|
All objects that contain data (tables, indexes, and indexed views) have an ON clause that you can use to specify the filegroup where SQL Server stores the object. As data is written to the objects, SQL Server uses the filegroup definition to determine on which file it should store the data
|
|
Can filegroups contain more than one file?
|
Yes
|
|
Check Pointing
|
All data manipulation within SQL Server occurs in memory within a set of buffers. If you are adding new data to a database, the new data is first written to a memory buffer, then written to the transaction log, and finally persisted to a data file via a background process called checkpointing.
|
|
When corrupt page is encountered ...
|
When a corrupt page is encountered, the page is logged to the suspect_pages table in msdb. If the DB is participating in a Database Mirroring session, SQL Server automatically retrieves a copy of the page from the mirror, replaces the page on the principal, and logs an entry in the sys.dm_db_mirroring_auto_page_repair view.
|
|
suspect_pages table in msdb
|
Log of all corrupt pages.
|
|
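A sketch of inspecting logged corruption, assuming only the documented columns of msdb.dbo.suspect_pages:

```sql
-- List corrupt pages that SQL Server has recorded, most recent first.
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       page_id,
       event_type,
       error_count,
       last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date DESC;
```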
sys.dm_db_mirroring_auto_page_repair view
|
Holds a log of each time a copy of a page is retrieved from the mirror to repair the principal when the database participates in Database Mirroring.
|
|
Simple syntax to create a schema
|
CREATE SCHEMA <schema name> AUTHORIZATION <owner name>
|
|
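A concrete sketch of the syntax above (the schema, owner, and table names are hypothetical; the owning user must already exist in the database):

```sql
-- Create a schema owned by an existing database user.
CREATE SCHEMA Sales AUTHORIZATION SalesManager;
GO
-- Objects can then be created inside the schema.
CREATE TABLE Sales.Orders (OrderID int PRIMARY KEY);
```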
Each index created causes performance degradation for the following (5) cmd
|
INSERT
UPDATE
DELETE
BULK INSERT
BCP |
|
Create non-clustered index example on Person.Address table
|
CREATE NONCLUSTERED INDEX idx_city ON Person.Address(City) INCLUDE (AddressLine1)
|
|
Example create a filtered index on Person.Address table
|
CREATE NONCLUSTERED INDEX idx_city2 ON Person.Address(City)
INCLUDE (AddressLine1, AddressLine2)
WHERE AddressLine2 IS NOT NULL |
|
Create a spatial index on Person.Address table
|
CREATE SPATIAL INDEX sidx_spatiallocation
ON Person.Address(SpatialLocation)
USING GEOGRAPHY_GRID
WITH (GRIDS = (MEDIUM, LOW, HIGH), CELLS_PER_OBJECT = 64); |
|
Can you create full text indexes on multiple columns?
What about full text index on a table or indexed view? |
You can create full text indexes on multiple columns.
You can create only a single full text index on a table or indexed view. |
|
CHANGE TRACKING option on CREATE FULL TEXT INDEX
|
CHANGE_TRACKING option for a full text index determines how SQL Server maintains the index when the underlying data changes.
|
|
Full text index and full text catalog connection
|
Before creating a full text index, you must create a full text catalog that is mapped to a filegroup to contain one or more full text indexes.
|
|
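The catalog-then-index order can be sketched as follows (the catalog name is hypothetical; the table and the unique key index name assume the AdventureWorks sample database):

```sql
-- A full text index requires a catalog; KEY INDEX must name a unique index on the table.
CREATE FULLTEXT CATALOG ProductCatalog AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON Production.ProductDescription(Description)
    KEY INDEX PK_ProductDescription_ProductDescriptionID
    ON ProductCatalog
    WITH CHANGE_TRACKING AUTO;
```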
In what partition are null values stored when we use a partitioning function.
|
NULL values are always stored in the leftmost partition,
unless you explicitly specify NULL as a boundary point and use the RANGE RIGHT syntax, in which case they are stored in the rightmost partition. |
|
What is the requirement for filegroup specified in the CREATE PARTITION SCHEME statement.
|
Each filegroup specified must already exist in the database.
|
|
part.func mapped to part.schema
----------------------------------
CREATE PARTITION FUNCTION mypartfunction (int)
AS RANGE LEFT FOR VALUES (10, 20, 30, 40, 50, 60);
GO
CREATE PARTITION SCHEME mypartscheme
AS PARTITION mypartfunction
TO (FG1, FG2, FG3, FG4, FG5, FG6, FG7);
GO |
FG1: neg. inf. to 10
FG2: 11 to 20
FG3: 21 to 30
FG4: 31 to 40
FG5: 41 to 50
FG6: 51 to 60
FG7: 61 to pos. inf. |
|
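A table placed on the scheme above can be sketched as follows (the table definition is hypothetical; rows are routed to filegroups by the EmployeeID partitioning column):

```sql
-- Create a table on the partition scheme; SQL Server routes each row
-- to FG1..FG7 based on the EmployeeID value and the partition function.
CREATE TABLE dbo.Employee (
    EmployeeID int NOT NULL,
    FirstName  nvarchar(50) NOT NULL
) ON mypartscheme(EmployeeID);
```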
Partition an index example
|
CREATE NONCLUSTERED INDEX ind_employeefirstname
ON dbo.Employee(FirstName)
ON mypartscheme(EmployeeID);
GO |
|
If you are exporting data using BCP, the account that BCP is running under needs ________ permissions on the table or view (1)
|
SELECT
|
|
If you are importing data, the account that BCP is running under needs ____________ permissions (3)
|
ALTER TABLE
INSERT
SELECT |
|
When to use the SQL Server Import and Export Wizard.
|
Use the wizard when you simply want to move data between a source and a destination. When you also want to apply transformations and error-handling routines, use the capabilities of SSIS to build packages.
|
|
Backups execute under the security context of what account
|
SQL Server service account
|
|
What can members of the sysadmin, db_owner,
and db_backupoperator roles back up? |
sysadmin can back up any DB in an instance
db_owner can back up only their own DB
db_backupoperator can back up any DB but has no other access |
|
SQL server backup types
|
Full
Differential
Transaction log
Filegroup |
|
To execute a transaction log backup, the database must be in what recovery model
|
Full or Bulk-logged
A full backup must also have been executed, and the transaction log must not have been truncated since the last full backup. |
|
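The backup types above can be sketched as follows (the database name and disk paths are hypothetical):

```sql
-- Full backup: the entire database.
BACKUP DATABASE AdventureWorks TO DISK = 'C:\Backups\AW_full.bak';

-- Differential backup: only pages changed since the last full backup.
BACKUP DATABASE AdventureWorks TO DISK = 'C:\Backups\AW_diff.bak'
    WITH DIFFERENTIAL;

-- Transaction log backup: requires the Full or Bulk-logged recovery model.
BACKUP LOG AdventureWorks TO DISK = 'C:\Backups\AW_log.trn';
```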
What is a dirty page?
|
All data changes occur within buffers in memory. When a change is made, the corresponding buffer is modified and a record is added to the transaction log. A modified page in the buffer pool is referred to as a dirty page.
Every dirty page tracks the LSN in the transaction log that corresponds to the change that modified the page in the buffer pool. When SQL Server executes a checkpoint, all dirty pages in the buffer pool are written out to the data files. |
|
Database snapshots
|
Feature introduced in SQL Server 2005
A method to quickly create read-only copies of a database
Available only in SQL Server 2008 Enterprise |
|
Database snapshots And filestreams
|
The two features are incompatible.
If you create a database snapshot against a DB with FILESTREAM data, the FILESTREAM filegroup is disabled and not accessible. |
|
Cmd for creating a db snapshot
|
CREATE DATABASE dbname ON
(NAME = logical_filename, FILENAME = 'os_filename')
AS SNAPSHOT OF source_db_name |
|
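A concrete sketch of the syntax above (the snapshot name and file path are hypothetical; one NAME/FILENAME pair is required for each data file in the source database):

```sql
-- Create a point-in-time, read-only snapshot of AdventureWorks.
-- The .ss file is an NTFS sparse file that grows as pages are copied in.
CREATE DATABASE AdventureWorks_snap ON
    (NAME = AdventureWorks_Data,
     FILENAME = 'C:\Snapshots\AdventureWorks_Data.ss')
AS SNAPSHOT OF AdventureWorks;
```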
Restrictions for db snapshot
|
1) Can't back up, restore, or detach a db snapshot
2) A db snapshot must exist on the same instance as the source db
3) Full text indexes are not supported
4) FILESTREAM is not supported, and any FILESTREAM data is inaccessible through a db snapshot
5) Can't create a db snapshot against a system db
6) Can't drop, restore, or detach a db that has a db snapshot created against it
7) Can't reference filegroups that are offline, defunct, or restoring |
|
Sparse file
|
When a db snapshot is created, SQL Server uses a sparse file.
A sparse file is an entry in the file system that initially consumes no space. As data is added to the file, the file automatically grows. |
|
Copy-on-write technology
|
A SQL Server mechanism that maintains db snapshots when changes occur in the source db.
The first time a data page is modified, SQL Server copies the before image of the page to the db snapshot. |
|
Db snapshot max size
|
Because the db snapshot is fixed at a point in time, its maximum size is the amount of data that existed in the source db at the time the snapshot was created
|
|
Filestream and DB snapshots notes
|
The two are not compatible; a DB snapshot of a database containing FILESTREAM data cannot be used for a RESTORE DATABASE operation
|
|
Each time a job executes, SQL Server logs information into the
|
dbo.sysjobhistory table in the msdb database
|
|
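A sketch of reading that history, joining to dbo.sysjobs for the job name (only documented msdb columns are used):

```sql
-- Most recent job executions first; run_status 1 = succeeded, 0 = failed.
SELECT j.name,
       h.run_date,
       h.run_time,
       h.run_status,
       h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = h.job_id
ORDER BY h.run_date DESC, h.run_time DESC;
```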
endpoints
|
Control the capability to connect to an instance of SQL Server as well as to dictate the communications methods that are acceptable.
|
|
endpoints parts
|
Transport - TCP or HTTP
Payload - defines the basic category of traffic |
|
endpoints payloads
|
SOAP
TSQL
SERVICE_BROKER
DATABASE_MIRRORING |
|
List valid combinations of endpoint transport and endpoint payload
|
TCP - TSQL
TCP - SERVICE_BROKER
TCP - DATABASE_MIRRORING
HTTP - SOAP |
|
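One of the valid combinations above (TCP transport, DATABASE_MIRRORING payload) can be sketched as follows (the endpoint name and port are hypothetical):

```sql
-- A TCP endpoint for database mirroring, listening on port 5022
-- and able to act as either principal or mirror (ROLE = PARTNER).
CREATE ENDPOINT MirroringEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);
```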
endpoint states
|
Define the first layer of access security
STARTED - the endpoint is actively listening for connections and will reply to an application
STOPPED - the endpoint is actively listening but returns a connection error to an application
DISABLED - the endpoint does not listen and does not respond to any connection attempt |
|
possible layers of security for an endpoint
|
1) Endpoint state
2) Permission to connect to the endpoint - the application must have a login created in SQL Server with the CONNECT permission assigned |
|
Database Mirroring and Service Broker endpoints
|
Provide options to specify the authentication method and the encryption settings.
|
|
What is the default encryption algorithm between endpoints used by SQL Server?
|
RC4 is the default; AES is recommended
|
|
Database mirroring specific arguments
|
PARTNER - can participate as the principal or as the mirror
WITNESS - can participate only as the witness
ALL - can function in any role |
|
Service Broker specific arguments
|
Authentication mode - NTLM, KERBEROS, or NEGOTIATE
Encryption - RC4 or AES
Message forwarding - MESSAGE_FORWARDING [ ENABLED | DISABLED ] enables messages destined for a different broker instance to be forwarded; when ENABLED, you can also specify MESSAGE_FORWARDING_SIZE |
|
principals
|
The means by which you authenticate and are identified within an instance or DB.
Two major categories: logins/users and groups, which exist at both the instance and DB level |
|
logins within SQL Server 2008 can be 5 different types
|
Standard SQL Server logins
Windows logins
Windows groups
Certificates
Asymmetric keys |
|
Standard SQL Server login
|
Created by a DBA and configured with a username and password.
Stored in the master database and assigned a local security identifier (SID) within SQL Server. |
|
SQL Server logins mapped to certificates or asymmetric keys
|
Possible, but does not provide a means to authenticate to the instance.
Logins mapped to certificates and asymmetric keys are used internally as security containers. |
|
CREATE a LOGIN cmd
|
CREATE LOGIN loginName {options}
PASSWORD
SID
WINDOWS |
|
CREATE LOGIN and CHECK_POLICY option
|
The default and recommended setting
SQL Server 2008 will enforce Windows password policy |
|
CREATE LOGIN and CHECK_EXPIRATION option
|
Enforces the Windows password expiration policy, so that logins must periodically change their passwords (it is the account lockout enforced by CHECK_POLICY that protects against brute force attacks)
|
|
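The two options from the cards above can be sketched together (the login name and password are hypothetical; CHECK_EXPIRATION = ON requires CHECK_POLICY = ON):

```sql
-- A standard SQL Server login subject to the Windows password
-- complexity/lockout policy and to password expiration.
CREATE LOGIN AppLogin
    WITH PASSWORD = 'Str0ng!Passw0rd',
         CHECK_POLICY = ON,
         CHECK_EXPIRATION = ON;
```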
ALTER LOGIN <loginname> DISABLE
|
To prevent access, you can revoke permissions from a login, but afterward you would need to reestablish those permissions. By disabling the login instead, you prevent access while keeping the permissions intact.
|
|
Roles in SQL Server
|
same functionality as groups in Windows
|
|
bulkadmin
dbcreator diskadmin processadmin |
bulkadmin - administer BCP and BULK INSERT operations
dbcreator - create DBs
diskadmin - manage disk resources
processadmin - manage connections and start or pause an instance |
|
securityadmin
serveradmin setupadmin sysadmin |
securityadmin - create, alter, and drop logins, but can't change passwords
serveradmin - change server-wide configuration settings, manage endpoints, and shut down the instance
setupadmin - manage linked servers
sysadmin - perform any action within the instance |
|
SQL Security works on the Principle of
|
No access by default
You must be explicitly granted permission to perform an action or access a DB |
|
CREATE USER syntax
|
CREATE USER user_name
[ { FOR | FROM } { LOGIN login_name | CERTIFICATE cert_name | ASYMMETRIC KEY asym_key_name } | WITHOUT LOGIN ]
[ WITH DEFAULT_SCHEMA = schema_name ] |
|
loginless user
|
It is possible to create a user in the database that is not associated to a login.
|
|
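A sketch of creating and impersonating a loginless user (the user name and granted permission are hypothetical; the caller needs IMPERSONATE permission on the user):

```sql
-- A database user with no associated login.
CREATE USER ReportingProxy WITHOUT LOGIN;
GRANT SELECT ON SCHEMA::dbo TO ReportingProxy;

-- Switch security context to pick up the loginless user's permissions.
EXECUTE AS USER = 'ReportingProxy';
-- ... statements here run with ReportingProxy's permissions ...
REVERT;  -- return to the original security context
```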
application roles
|
Prior to SQL Server 2005, if you wanted to allow users to access a DB only when a specific application was being used, you used an application role. An application role is created with a password and assigned permissions. Users specify the password for the application role to gain access to the DB under the application role's security context.
|
|
loginless users and application roles
|
Loginless users are designed to replace application roles.
Users still authenticate to the instance using their own credentials, and the user's login needs access to the DB. After SQL Server changes the user's context to the DB, the user impersonates the loginless user to gain the necessary permissions. Because the user authenticates to the instance with his own credentials, SQL Server can still audit the user's activity. |
|
db_accessadmin
db_backupoperator db_datareader db_datawriter |
db_accessadmin - add or remove users
db_backupoperator - back up the DB; no restore or ability to view information
db_datareader - issue SELECT against all tables, views, and functions within the DB
db_datawriter - issue INSERT, UPDATE, DELETE, and MERGE against all tables within the DB; members of this role must also be members of db_datareader |
|
db_ddladmin
db_denydatareader db_denydatawriter db_owner |
db_ddladmin - execute DDL statements
db_denydatareader - prevent SELECT against all tables, views, and functions within the DB
db_denydatawriter - prevent INSERT, UPDATE, DELETE, and MERGE
db_owner - owner of the DB with full control over the DB and all objects contained within it |
|
db_securityadmin
public |
db_securityadmin - manage role membership and associated permissions, but cannot manage membership for the db_owner role
public - default group in every DB that all users belong to |
|
accounts that are considered administrative accounts
|
members of the sysadmin fixed server role
members of the db_owner fixed database role
the sa account |
|
securables
|
The objects on which you grant permissions.
Every object is a securable, and securables can be nested. |
|
schemas
|
Created in a DB and require an owner.
Schemas provide the containers that own all objects within a DB, and a schema is in turn owned by a database user. This indirection lets you drop a user without affecting the objects. |
|
Permissions
|
Provide the authority for principals to perform actions within an instance or DB.
You add permissions with GRANT.
You explicitly prohibit an action with DENY.
DENY overrides GRANT. |
|
REVOKE statement
|
Removes previously granted or denied permission entries for the referenced object.
|
|
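The interplay of GRANT, DENY, and REVOKE from the cards above can be sketched as follows (the table and user names are hypothetical):

```sql
-- Allow the user to read the table.
GRANT SELECT ON dbo.Orders TO SomeUser;

-- Explicitly prohibit reads; DENY wins over any GRANT.
DENY SELECT ON dbo.Orders TO SomeUser;

-- Remove the permission entry entirely (whether it was a GRANT or a DENY).
REVOKE SELECT ON dbo.Orders FROM SomeUser;
```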
Metadata Security
|
SQL Server secures all the metadata within the system such that you can view only the objects within an instance or DB on which you have permission
|