A script to generate the default constraints in a database:

SELECT 'ALTER TABLE [' + SCHEMA_NAME(t.schema_id) + '].[' + t.name + ']
ADD CONSTRAINT [' + dc.name + '] DEFAULT ' + dc.definition + ' FOR [' + c.name + '];'
FROM sys.default_constraints AS dc
INNER JOIN sys.tables AS t
    ON dc.parent_object_id = t.object_id
INNER JOIN sys.columns AS c
    ON dc.parent_column_id = c.column_id
    AND dc.parent_object_id = c.object_id;

What is the Query Store?

The Query Store is an easy way to track the queries running in your database. With its pre-made charts and filters, queries can be aggregated on CPU, reads, execution count, and so on.
You can also force query plans, so that a query always uses a certain plan. We will be using plan forcing further down this blog post.
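As a sketch of what the Query Store exposes under the hood, the same aggregations can be run directly against its catalog views. The query below (the TOP count is an arbitrary choice) sums total CPU time per query:

```sql
-- Top 10 queries by total CPU time, from the Query Store catalog views
SELECT TOP (10)
       q.query_id,
       t.query_sql_text,
       SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS t ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id, t.query_sql_text
ORDER BY total_cpu_time DESC;
```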

Scenario:

You just migrated to a newer version of SQL Server (2016 or later), and a query that had no issues previously is behaving worse than it should.
The most common reason for this is the set of query optimizer changes introduced in SQL Server 2014, which is why the issue tends to appear when migrating from SQL Server 2012 or even older versions.
Most of the time the query optimizer should pick the better plan, but in some cases this does not go as planned.

The reason(s):

This might be because your query outdated, uses syntax that is already not the best approach
(functions in joins or where columns, implicit conversions , … .)
If that is the case, the best approach would be to rewrite it. Sometimes that is not possible. Reasons
could be: the application does not allow it, it needs to be thoroughly tested first, etc.
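As a hypothetical example of such a rewrite (dbo.Orders and OrderDate are made-up names), a function wrapped around a column can usually be replaced by a range predicate on the bare column:

```sql
-- Non-sargable: the function on the column blocks an index seek
SELECT * FROM dbo.Orders WHERE YEAR(OrderDate) = 2017;

-- Sargable rewrite: a range predicate on the bare column
SELECT * FROM dbo.Orders
WHERE OrderDate >= '20170101' AND OrderDate < '20180101';
```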

If changing the query structure is not possible, the Query Store might be your solution.
An important thing to note is that, when forcing a plan for a specific query, not only plans generated under compatibility level 130 or 140 can be captured, but also plans generated under compatibility level 110 or lower.

An extra factor is that your queries need to be parameterized already, so that they use the same query plan for different parameter values.
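To illustrate the parameterization requirement (dbo.Orders and CustomerId are made-up names), sp_executesql lets one plan be reused for every parameter value, instead of one plan entry per literal:

```sql
-- Unparameterized: each distinct literal can produce its own plan entry
EXEC ('SELECT * FROM dbo.Orders WHERE CustomerId = 42');

-- Parameterized: one plan is reused for every @CustomerId value
EXEC sp_executesql
     N'SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId',
     N'@CustomerId int',
     @CustomerId = 42;
```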

Step 1: Change the compatibility level of your database to 130 or 140, and turn the Query Store on

 

ALTER DATABASE testdb SET COMPATIBILITY_LEVEL = 140;
ALTER DATABASE testdb SET QUERY_STORE = ON;

If you’d like, you can execute the problematic query first, or directly execute it under a lower compatibility level.
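Dropping to the old compatibility level is again a one-liner; run the problematic query afterwards so the Query Store captures the old, good plan:

```sql
-- Temporarily return to the old compatibility level to capture the old plan
ALTER DATABASE testdb SET COMPATIBILITY_LEVEL = 110;
```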

Step 2: Execute and monitor the query

1) First, execute the query under the lower compatibility level, where it is no longer problematic.
2) Afterwards, find your query in the Query Store.
To find your query id in the Query Store, you could execute the following statements:

 

SELECT q.query_id, t.query_sql_text, object_name(q.object_id) AS parent_object
FROM sys.query_store_query_text t JOIN sys.query_store_query q
ON t.query_text_id = q.query_text_id
WHERE t.query_sql_text LIKE N'%Enter query here %'

For my query, I executed the following statement:

 

SELECT q.query_id, t.query_sql_text, object_name(q.object_id) AS parent_object
FROM sys.query_store_query_text t JOIN sys.query_store_query q
ON t.query_text_id = q.query_text_id
WHERE t.query_sql_text LIKE N'%select top(@p5)%'

Result:

 

 

Go to the Query Store folder in your database and select Tracked Queries –> View Tracked Queries.

Enter the query id in the tracking query pane:

 

 

Press Enter.

 

You can see that the plan at the top was the one used when executing the query under compatibility level 140; it had far more reads and CPU time (> 30 seconds elapsed vs. < 1 second).

 

Step 3: Force the query plan that was used under compatibility mode 110 or lower

 

 

Press Yes.
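If you prefer T-SQL over the GUI, the same thing can be done with the sp_query_store_force_plan procedure; the id values below are placeholders, use the query_id and plan_id you found above:

```sql
-- Force plan 7 for query 5 (placeholder ids);
-- undo later with sp_query_store_unforce_plan
EXEC sp_query_store_force_plan @query_id = 5, @plan_id = 7;
```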

 

Step 4: Set the compatibility level back to 140

ALTER DATABASE testdb SET COMPATIBILITY_LEVEL = 140;

End notes

And that’s all there is to it: the better query plan from compatibility level 110 will be used for future executions under higher compatibility levels.
There are definitely better solutions than this, but it might be helpful for some of you.
There are also ways plan forcing can fail. For example, if the database name changes, plan forcing will stop working, since forced query plans use three-part naming.
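Forcing failures like that do not raise errors at execution time, so it is worth checking on them; the sys.query_store_plan view records the failure count and reason:

```sql
-- Check whether any forced plans are failing, and why
SELECT p.query_id,
       p.plan_id,
       p.force_failure_count,
       p.last_force_failure_reason_desc
FROM sys.query_store_plan AS p
WHERE p.is_forced_plan = 1;
```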

 

Other ways to accomplish this

– Use the following trace flag to mimic pre 2014 query optimizer, but that also requires
changes to the query itself:
OPTION (QUERYTRACEON 9481).
– You could manually force a plan with the USE PLAN query hint.
– Change the databases compatibility level entirely.
– Change the query itself.
– See if statistics / index changes help the query.

I recently needed to script out some extended properties and create them on a different database.

This might help some of you when migrating, scripting out your objects, …:

SELECT CAST('EXEC sp_addextendedproperty N' + CHAR(39)
    + CAST(ep.name AS nvarchar(255)) + CHAR(39) + ', N' + CHAR(39)
    + CAST(ep.value AS nvarchar(255)) + CHAR(39) + ', ' + CHAR(39) + 'SCHEMA' + CHAR(39)
    + ', N' + CHAR(39) + CAST(s.name AS nvarchar(255)) + CHAR(39) + ', ' + CHAR(39) + 'TABLE'
    + CHAR(39) + ', N' + CHAR(39) + CAST(t.name AS nvarchar(255)) + CHAR(39) + ', ' + CHAR(39)
    + 'CONSTRAINT' + CHAR(39) + ', N' + CHAR(39) + CAST(fk.name AS nvarchar(255))
    + CHAR(39) AS nvarchar(400))
FROM sys.extended_properties AS ep
INNER JOIN sys.foreign_keys AS fk ON ep.major_id = fk.object_id
INNER JOIN sys.tables AS t ON fk.parent_object_id = t.object_id
LEFT JOIN sys.schemas AS s ON fk.schema_id = s.schema_id;

 

 

Example of the result:

EXEC sp_addextendedproperty N'FK_CHECK_Filter', N'(1 = 0)', 'SCHEMA', N'stp', 'TABLE', N'CRA', 'CONSTRAINT', N'FK_CRA_Country'

Intro

For a customer, I recently migrated one of their databases from 2012 to 2017, whilst putting compression on all the indexes.

The process of recreating the indexes on 8 new filegroups took ~3 hours for a 400 GB database.

Specs

From SQL Server 2012 (11.0.7001)

To SQL Server 2017 CU 1 (14.0.1000)

Windows Server 2012 R2

 

Query execution

When manually executing the queries, life was bliss.

When I put the queries needed to complete the task into a job with a few steps for automation purposes, the job completed, since there was try/catch logic present.

When I checked the captured errors, I was greeted with this message, which had not appeared when I ran the queries manually:

“CREATE INDEX failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.”

 

What is the QUOTED_IDENTIFIER setting?

Setting this to ON means that strings enclosed in double quotation marks (" ") are treated as identifiers instead of literals. The default for this is ON, and that is generally the better practice.

From Microsoft docs:

“When SET QUOTED_IDENTIFIER is ON (default), all strings delimited by double quotation marks are interpreted as object identifiers…”

“When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers. “

“SET QUOTED_IDENTIFIER must be ON when you are creating a filtered index.” https://docs.microsoft.com/en-us/sql/t-sql/statements/set-quoted-identifier-transact-sql

Since I had filtered indexes, QUOTED_IDENTIFIER needed to be ON at all times when executing the queries.
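A minimal repro of the error, assuming a hypothetical table dbo.t with a nullable column col:

```sql
SET QUOTED_IDENTIFIER OFF;

-- Fails with the error quoted above, because filtered indexes
-- require QUOTED_IDENTIFIER to be ON at creation time
CREATE INDEX IX_t_col ON dbo.t (col) WHERE col IS NOT NULL;
```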

 

Testing

If I execute the following statement under my account (sysadmin) on the database to be compressed:

DECLARE @QUOTED_IDENTIFIER VARCHAR(3) = 'OFF';

IF ( (256 & @@OPTIONS) = 256 ) SET @QUOTED_IDENTIFIER = 'ON';

SELECT @QUOTED_IDENTIFIER AS QUOTED_IDENTIFIER;
QUOTED_IDENTIFIER
ON

 

When I put the query that checks QUOTED_IDENTIFIER into a job and run it (in the user database to be compressed), this happens:

 

CREATE TABLE dbo.QUOTEDIDENTIFIER_CHECK (QUOTEDIDENTIFIER varchar(3));

DECLARE @QUOTED_IDENTIFIER VARCHAR(3) = 'OFF';
IF ( (256 & @@OPTIONS) = 256 ) SET @QUOTED_IDENTIFIER = 'ON';
SELECT @QUOTED_IDENTIFIER AS QUOTED_IDENTIFIER;

INSERT INTO dbo.QUOTEDIDENTIFIER_CHECK (QUOTEDIDENTIFIER)
SELECT @QUOTED_IDENTIFIER;

 


SELECT * FROM dbo.QUOTEDIDENTIFIER_CHECK;


QUOTED_IDENTIFIER
OFF

 

A simple approach to resolving this was to set it to ON.

 

QUOTED_IDENTIFIER
ON

 

And that is what I did: I added SET QUOTED_IDENTIFIER ON; to the executed statements.
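A sketch of what a fixed job step looks like (the index statement is a placeholder; the SET must precede the index creation in the same step):

```sql
SET QUOTED_IDENTIFIER ON; -- override the session default the Agent sets

-- Placeholder filtered, compressed index as used during the migration
CREATE INDEX IX_t_col ON dbo.t (col)
WHERE col IS NOT NULL
WITH (DATA_COMPRESSION = PAGE);
```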

But, I also wanted to know what causes this.

 

More testing

When tested with the job on a different server (11.0.7001.0):

QUOTED_IDENTIFIER
OFF

On yet another server (14.0.3015.40):

QUOTED_IDENTIFIER
OFF

 

After playing around with the connections at server and database level, I found what caused this.

 

The Culprit

 

When checking Profiler while the job was executing, I found that QUOTED_IDENTIFIER gets explicitly set to OFF before the step execution, under the SQL Server Agent service account (also not a best practice, but it illustrates the point better).

Closer look:

Executed by the agent:

It seems that this is default behavior for the Agent service. For now, I have not found a better way than explicitly setting QUOTED_IDENTIFIER to ON.

The reason behind this behavior is something I am still looking for, since it is not documented in the Microsoft docs on QUOTED_IDENTIFIER.

https://docs.microsoft.com/en-us/sql/t-sql/statements/set-quoted-identifier-transact-sql

 

Easy management with customized dashboards

SQL Operations Studio is a standalone application that lets you put frequently used queries into a dashboard for a quick overview of all your databases. The software is lightweight and uses very little resources, so it can effortlessly run next to more demanding tools like SQL Server Management Studio.

Using queries on your databases to get information such as available disk size isn’t new. Before, you could manually run scripts on each database to extract statistics and then combine them. However, if you need to check this data on a daily basis, that becomes quite cumbersome and time-consuming. After a one-time setup per query, Operations Studio runs them for you and presents the results in clear graphs on a dashboard. Want to see the available disk size the next day too? Just refresh. With SQL Operations Studio you can now get insight into your databases at the push of a button.

 

Common graphs to use in SQL Operations Studio

We’ve tinkered with queries and creating a dashboard in SQL Operations Studio. Below you’ll find a couple of useful graphs to include, with the needed code. Keep in mind, though, that not every graph works for every query.

 

Available disk size

Query:

SELECT DISTINCT SUBSTRING(volume_mount_point, 1, 1) AS drive_letter,
    (total_bytes / 1024 / 1024 - available_bytes / 1024 / 1024) AS usedsize_mb,
    available_bytes / 1024 / 1024 AS available_size_mb
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id);

Graph:

Fig1. Okay for a pie chart

 

 

If we convert this pie chart to a bar chart however, we get a distorted bar chart, and a very imprecise view of the data.

Fig2. Not okay for a bar chart

 

And if you try to add even more values, like wait stats, things go terribly wrong:

Fig3. Whoops

 

How many databases are offline or online:

Fig5. Database state

Script:

SELECT d.state_desc,
    COUNT(d.state_desc) AS state
FROM sys.databases AS d
WHERE d.replica_id IS NULL
GROUP BY d.state_desc;

How many logins have sysadmin rights

Fig6. Sysadmins <-> NonSysAdmins  (and yes, my security policy sucks.)

Script:

SELECT (SELECT COUNT(*) FROM sys.syslogins WHERE sysadmin <> 1) AS amount_of_logins_not_sysadmin,
       (SELECT COUNT(sysadmin) FROM sys.syslogins WHERE sysadmin = 1) AS amount_of_sysadmins;

How many jobs have been scheduled or need to be scheduled

Fig 7. Looks like some jobs need to be scheduled.

Script:

SELECT (SELECT COUNT(s.name)
        FROM msdb..sysschedules AS s
        INNER JOIN msdb..sysjobschedules AS a ON s.schedule_id = a.schedule_id
        INNER JOIN msdb..sysjobs AS j ON a.job_id = j.job_id) AS amount_of_jobs_scheduled,

       ((SELECT COUNT(job_id) FROM msdb..sysjobs) -
        (SELECT COUNT(s.name)
         FROM msdb..sysschedules AS s
         INNER JOIN msdb..sysjobschedules AS a ON s.schedule_id = a.schedule_id
         INNER JOIN msdb..sysjobs AS j ON a.job_id = j.job_id)) AS amount_of_jobs_not_scheduled;

Get InstanceInformation

Fig 8. InstanceInformation

This one is a bit trickier. The script is bigger, and the widget and chart are customized, since the default settings could not display the graph. We created the count chart using Create Insight and tweaked the JSON settings.