
PowerShell Scripts – get-process with SQL Server process


Working with PowerShell scripts can be interesting. In the past I have shown a number of such scripts that we can use with SQL Server. In this blog, I was playing around to understand the Get-Process cmdlet and how it can be used with SQL Server. This exploration got me to write this rather simple yet useful post that you might use in your environments.

I start off by looking for the SQL Server process information in the list of processes running on the system.

# List all running processes
Get-Process

# List only the SQL Server processes
Get-Process sqlservr

As you can see, the output looks something like this. Here we have the host process ID too.

[Image: get-process-01]

There is additional information we can get from this, but we will reserve that for some other day. What I was interested in was understanding which members and properties these process objects expose.

# The command below lists the properties available on the objects returned by Get-Process.
Get-Process sqlservr | Get-Member -MemberType Properties

This gives us all the properties we can use with the Get-Process cmdlet. The output looks like this:

[Image: get-process-02]

What I was experimenting with was whether there is a way to copy content from the output window. First I played around with copying from the command-line window itself, which was interesting but nothing new. What I did find was a handy trick that lets you send the output straight to the clipboard.

# To save time copying the output, pipe it to the clipboard
# so that it can be pasted directly at our end.
Get-Process sqlservr | Select-Object ProcessName, Id, StartTime,
FileVersion, Path, Description, Product | clip

Now that we have it in our clipboard, we can paste it into any application of choice.
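If you prefer something more durable than the clipboard, here is a minimal sketch (my own addition, not part of the original script) that exports the same properties to a CSV file; the output path below is just an example.

# Export the same SQL Server process details to a CSV file instead of the clipboard
# (C:\Temp\sqlservr-process.csv is an example path)
Get-Process sqlservr | Select-Object ProcessName, Id, StartTime,
    FileVersion, Path, Description, Product |
    Export-Csv -Path 'C:\Temp\sqlservr-process.csv' -NoTypeInformation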

I thought this was worth a share as I play more with SQL Server and PowerShell combinations. Though such learnings get me started with some simple scripts, I would love to learn from you how you use PowerShell with SQL Server in your environments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)



PowerShell – Tip: How to Format PowerShell Script Output?


I have been writing about various ways of working with PowerShell and how to connect to SQL Server. Personally, when I see raw text on the command prompt, it is quite a mess and very difficult to decipher. If you play around and look at various blog posts, they show some interesting outputs, and a lot of that comes down to how the PowerShell output is formatted.

In this blog post, let me take you through three of the most common ways in which people format the output of a PowerShell script. This is not an exhaustive list, but it is a great start, and you will surely start falling in love with this capability. Trust me on it.

Method 1: This is an age-old classic wherein we format the output from a PowerShell script as a table using Format-Table (ft for short). A typical command looks like:

# Demo Format-Table cmdlet (alias: ft)
Invoke-Sqlcmd "Select * from sys.dm_exec_connections" -ServerInstance . | ft

[Image: format-powershell-01]

As you can see above, the output of our SQL query is now formatted in a nice table structure and is properly delimited. I am generally a big fan of using this with my scripts, as it is easily readable.
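If the table still looks cramped because sys.dm_exec_connections has a lot of columns, one variation (my own tweak, not from the original post) is to pick only the columns you care about and let Format-Table size them automatically:

# Pick a few columns and let Format-Table auto-size them for readability
Invoke-Sqlcmd "SELECT session_id, net_transport, client_net_address, connect_time FROM sys.dm_exec_connections" -ServerInstance . |
    Format-Table session_id, net_transport, client_net_address, connect_time -AutoSize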

Method 2: This is yet another variation of the same output, but this time we can take the output and make it into a formatted list (fl). A typical usage of this would look like:

#Demo Format-List cmdlet. Alias fl
invoke-sqlcmd "Select * from sys.dm_exec_connections" -ServerInstance . | fl 

There are people who like to see these as property sheets; I am not generally inclined toward this output. But I am sure there will be use cases where it makes complete sense.

[Image: format-powershell-02]

Method 3: This one was a revelation for me. I was using the PowerShell ISE, and one of the output options is to use Out-GridView. It can be used like this:

# Demo Out-GridView cmdlet
Invoke-Sqlcmd "Select * from sys.dm_exec_connections" -ServerInstance . | Out-GridView

[Image: format-powershell-03]

As you can see, the output is now shown in a grid window, just like what I am used to in SQL Server Management Studio. I personally felt this was an awesome capability.
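One more trick worth knowing (not covered in the original post): Out-GridView has a -PassThru switch, so the rows you select in the grid are handed back to the pipeline, for example to export just your selection (the output path below is an example).

# Select rows interactively in the grid and pass only the selection down the pipeline
Invoke-Sqlcmd "Select * from sys.dm_exec_connections" -ServerInstance . |
    Out-GridView -PassThru |
    Export-Csv -Path 'C:\Temp\selected-connections.csv' -NoTypeInformation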

I am sure many of you are power users and might have used these in different ways. Please let me know which of these outputs you like the most, and whether there are any other methods I should know about.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Displaying SQL Agent Jobs Running at a Specific Time


Recently I was troubleshooting something at a customer location that looked trivial. The customer had approached me with a consulting requirement, saying their system was going unresponsive every morning around a certain time. They were clueless about what was happening and why this was the case almost every day of the week. I got curious to understand whether SQL Agent jobs had anything to do with it.

Some of these problems can take a really long time to solve, while others are as simple as you think. Here I was clueless about the problem. When I got into active discussion with the team, I suspected there was something they were not telling me. After some troubleshooting with the team using tools like PerfMon and Profiler, I figured out there was a background process running at that time.

[Image: jobsrunning]

I asked the team if any Agent jobs were running at that time. I could see they were clueless and looking at each other. One developer put the ball back in my court by asking if there was a method or script to help them find out whether any jobs were running. I went to the script bank I maintain and found there was already one handy.

Listing SQL Agent Jobs Running at a Specific Time

SELECT * FROM
(
 SELECT JobName, RunStart, DATEADD(second, RunSeconds, RunStart) RunEnd, RunSeconds
 FROM
 (
  SELECT j.name AS 'JobName',
    msdb.dbo.agent_datetime(run_date, run_time) AS 'RunStart',
    ((jh.run_duration/1000000)*86400) 
    + (((jh.run_duration-((jh.run_duration/1000000)*1000000))/10000)*3600) 
    + (((jh.run_duration-((jh.run_duration/10000)*10000))/100)*60) 
    + (jh.run_duration-(jh.run_duration/100)*100) RunSeconds
  FROM msdb.dbo.sysjobs j 
  INNER JOIN msdb.dbo.sysjobhistory jh ON j.job_id = jh.job_id 
  WHERE jh.step_id=0 --The Summary Step
 ) AS H
) AS H2
WHERE '2016-05-19 10:16:10' BETWEEN RunStart AND RunEnd
ORDER BY JobName, RunEnd
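As a companion to the historical query above, here is a small sketch (my own addition, not part of the original script) that lists the Agent jobs running right now, using msdb.dbo.sysjobactivity:

-- Jobs that are running right now: started in the current Agent session and not yet stopped
SELECT j.name AS JobName,
       ja.start_execution_date
FROM msdb.dbo.sysjobactivity AS ja
INNER JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
WHERE ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
  AND ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL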

I personally found this handy, and the problem was solved as soon as this query ran. They identified a recently deployed batch process: instead of running it at 10 PM, the administrator had mistakenly scheduled it for 10 AM.

This revelation was an eye-opener about the care one needs to take while doing such configurations. Such a simple task can sometimes take ages to solve, and a human error can bring a system down so easily. I learnt something new and felt this learning could be useful to you too. Do let me know if you find this script about SQL Agent jobs useful, or feel free to extend it and share via the comments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)


SQL SERVER – Fix: Error: Msg 1904 The statistics on table has 65 columns in the key list


[Image: statisticserror]

With SQL Server 2016, I have come to know that some of the restrictions which applied earlier are no longer the limits to look for. During one such experiment, I stumbled upon my earlier blog post: SQL SERVER – Fix: Error: Msg 1904, Level 16 The statistics on table has 33 column names in statistics key list. The maximum limit for index or statistics key column list is 32.

I thought 32 columns was already far too many, but to my surprise, when I used the same script from that blog, the error was not popping up.

I went into exploration mode to find out when the error would pop up. The script this time was:

DROP DATABASE IF EXISTS TestDB
GO
CREATE DATABASE TestDB
GO
USE TestDB
GO
CREATE TABLE Test1
(ID1 INT,  ID2 INT, ID3 INT, ID4 INT, ID5 INT, ID6 INT,
ID7 INT, ID8 INT, ID9 INT, ID10 INT, ID11 INT, ID12 INT,
ID13 INT, ID14 INT, ID15 INT, ID16 INT, ID17 INT, ID18 INT,
ID19 INT, ID20 INT, ID21 INT, ID22 INT, ID23 INT, ID24 INT,
ID25 INT, ID26 INT, ID27 INT, ID28 INT, ID29 INT, ID30 INT,
ID31 INT, ID32 INT, ID33 INT, ID34 INT, ID35 INT, ID36 INT,
ID37 INT, ID38 INT, ID39 INT, ID40 INT, ID41 INT, ID42 INT,
ID43 INT, ID44 INT, ID45 INT, ID46 INT, ID47 INT, ID48 INT,
ID49 INT, ID50 INT, ID51 INT, ID52 INT, ID53 INT, ID54 INT,
ID55 INT, ID56 INT, ID57 INT, ID58 INT, ID59 INT, ID60 INT,
ID61 INT, ID62 INT, ID63 INT, ID64 INT, ID65 INT)
GO

And for creating the statistics, I have used the below:

CREATE STATISTICS [Stats_Test1] ON [dbo].[Test1] (ID1,
ID2,ID3,ID4,ID5,ID6,ID7,ID8,ID9,ID10,ID11,ID12,ID13,ID14,ID15,
ID16,ID17,ID18,ID19,ID20,ID21,ID22,ID23,ID24,ID25,ID26,ID27,ID28,
ID29,ID30,ID31,ID32,ID33,ID34,ID35,ID36,ID37,ID38,ID39,ID40,ID41,
ID42,ID43,ID44,ID45,ID46,ID47,ID48,ID49,ID50,ID51,ID52,ID53,ID54,
ID55,ID56,ID57,ID58,ID59,ID60,ID61,ID62,ID63,ID64,ID65)
GO

Msg 1904, Level 16, State 2, Line 21
The statistics ‘Stats_Test1’ on table ‘dbo.Test1’ has 65 columns in the key list. The maximum limit for statistics key column list is 64.

As you can see, the maximum limit for the statistics key column list has now been doubled to 64. So the limit has changed with SQL Server 2016. You will still hit this error if you try to create statistics with more than 64 columns in the key list.

At this moment I would like to point you to the Maximum Capacity Specifications for SQL Server page on MSDN for reference, because this is the root page for all the limits that one needs to be aware of. Have you ever hit this limit in your environments or scripts? Do let me know via comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL Server Compression 101


The data compression feature in SQL Server can help reduce the size of the database as well as improve the performance of I/O-intensive workloads, especially on data warehouses. This performance boost in I/O is offset against the extra CPU resource required to compress and decompress the data while it is exchanged with the application. With data warehouses, the guideline is to compress all objects, as there is typically spare CPU capacity while data storage and memory capacity are at a premium, so Microsoft's recommendation is to compress all objects to the highest level. However, if there are no compressed objects at all, it is better to take the cautious approach and evaluate each database object individually and its effect on the workload, particularly if the CPU headroom is limited.

SQL Server 2008 introduced two levels of compression: row-level compression and page-level compression. (Note: SQL Server 2014 introduced two further levels, columnstore and columnstore_archive, but we will not be discussing these as I have blogged about them before.) Row compression applies variable-length storage to fixed-length data types (e.g. an int value may not be using all the space reserved for it on disk because it is a small number), amongst other storage-saving techniques. Page compression applies row compression as well as prefix and dictionary compression (i.e. looking for patterns in the data and, rather than repeating those patterns, marking how many times each pattern is repeated).
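For reference, here is a minimal sketch of turning each level on for an existing object; the table and index names below are placeholders, not objects from this post.

-- Enable row compression on a table, then page compression on one of its indexes
-- (dbo.SalesOrderDetail and IX_SalesOrderDetail_ProductID are placeholder names)
ALTER TABLE dbo.SalesOrderDetail REBUILD WITH (DATA_COMPRESSION = ROW);

ALTER INDEX IX_SalesOrderDetail_ProductID ON dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);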

Compression is applied at the object level (table, index, or indexed view) rather than at the database or instance level. Compression is an Enterprise Edition feature (also available in Developer Edition), and as such any database that has compression applied can only be restored to other Enterprise or Developer based SQL Server instances. You can check which such features are already enabled in a database by querying the sys.dm_db_persisted_sku_features DMV.
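A minimal form of that check looks like this (the original post showed its output only as a screenshot):

-- Lists Enterprise-only features, such as Compression, in use in the current database
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;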

[Image: sku_features]

This script will find all compressed objects. The results are grouped by table and compression type, as partitioned tables may have different types of compression enabled per partition. If we do not include the GROUP BY, then all partitions will be listed regardless of whether the compression type is different or not, and that looks really messy.

SELECT SCHEMA_NAME(sys.objects.schema_id) AS [Schema]
,OBJECT_NAME(sys.objects.object_id) AS [Object]
,[data_compression_desc] AS [CompressionType]
FROM sys.partitions
INNER JOIN sys.objects ON sys.partitions.object_id = sys.objects.object_id
WHERE data_compression > 0
AND SCHEMA_NAME(sys.objects.schema_id) <> 'SYS'
GROUP BY SCHEMA_NAME(sys.objects.schema_id)
,OBJECT_NAME(sys.objects.object_id)
,[data_compression_desc]
ORDER BY [Schema]
,[Object]

There is also the option to apply data compression to objects by default, but this can be risky: if you compress all objects without considering your workload, you will hurt CPU performance by compressing objects that may not even bring any space-saving benefit.
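Before compressing an object it is worth estimating the benefit first; here is a sketch using the built-in estimation procedure (the schema and table names are placeholders).

-- Estimate how much space page compression would save for a table
-- ('dbo' and 'SalesOrderDetail' are placeholders; NULL, NULL means all indexes and partitions)
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'SalesOrderDetail',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';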

It is also important, if your database is in source control (and if it isn't, why on earth not?), that compressed objects are explicitly declared with their compression settings; otherwise, when a dacpac is compared against the database, any inconsistencies in compression are taken into account during deployment. This means that not only might uncompressed objects get compressed, but compressed objects may get uncompressed. This can take a very long time and could cause disks to run out of space, causing deployments to fail.


SQL SERVER – FIX Error: Maintenance plan scheduled but job is not running as per schedule


[Image: anothererror]

This was one of the most interesting issues I have solved in recent days. One of my clients contacted me and said that they had scheduled a maintenance plan to take a t-log backup at 10 PM, but it was not running. When we looked into the job history, it showed no history at all. Problem statements like these are interesting because they look trivial and simple, yet they are convoluted and not straightforward to solve. Let us learn how to solve the error “Maintenance plan scheduled, but the job is not running as per schedule”.

I asked them to show me the problem so that I could watch it live and look at the various things happening. I also asked them to share the LOG folder, which contains the SQLAGENT.OUT file.

SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location

When I looked into the file, I was able to find interesting messages like below.

2016-06-29 20:00:00 – ! [298] SQLServer Error: 229, The EXECUTE permission was denied on the object ‘sp_sqlagent_log_jobhistory’, database ‘msdb’, schema ‘dbo’. [SQLSTATE 42000] (ConnExecuteCachableOp)
2016-06-29 20:10:36 – ! [298] SQLServer Error: 229, The SELECT permission was denied on the object ‘sysjobschedules’, database ‘msdb’, schema ‘dbo’. [SQLSTATE 42000] (SaveAllSchedules)
2016-06-29 20:10:36 – ! [298] SQLServer Error: 229, The UPDATE permission was denied on the object ‘sysjobschedules’, database ‘msdb’, schema ‘dbo’. [SQLSTATE 42000] (SaveAllSchedules)
2016-06-29 20:10:36 – ! [376] Unable to save 1 updated schedule(s) for job T-log Backup 10 PM.Subplan_1

So the above was what the Agent log file showed around 8 PM. I asked them to open the maintenance plan and try saving the schedule again. As soon as we did that, no error was raised, but the job still did not reflect the schedule.

Fix/ Solution / Workaround:

I captured a Profiler trace while saving the maintenance plan and found that the permissions of the SQL Agent service account were not sufficient, which matched the [298] permission errors seen in the Agent log.

As per documentation: (Select an Account for the SQL Server Agent Service)

The account that the SQL Server Agent service runs as must be a member of the following SQL Server roles:

  • The account must be a member of the sysadmin fixed server role.
  • To use multiserver job processing, the account must be a member of the msdb database role TargetServersRole on the master server.
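Restoring the required role membership for the Agent service account addresses errors like these; here is a minimal sketch (the account name below is a placeholder, so review your own hardening policy before granting it).

-- Grant the SQL Server Agent service account membership in the sysadmin fixed server role
-- (CONTOSO\sqlagentsvc is a placeholder for the actual service account)
ALTER SERVER ROLE sysadmin ADD MEMBER [CONTOSO\sqlagentsvc];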

Later, the client informed me that this all started happening after they followed an article on the internet about hardening security.

Moral of the story: never blindly trust internet advice, as not everything you read is true. Always look at the author and check his or her reliability.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Unable to Recycle ERRORLOG Using sp_cycle_errorlog


Many DBAs create jobs in SQL Server to recycle the ERRORLOG files every midnight. They also increase the number of retained files from the default of 6 to a bigger number so that they keep data for more than 6 days.
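For reference, here is a minimal sketch of what such a setup looks like (my own sketch, not from the post; the retained-file count of 30 is just an example):

-- Step executed by the nightly Agent job: close the current ERRORLOG and start a new one
EXEC sp_cycle_errorlog;

-- Increase the number of retained ERRORLOG files from the default of 6 to 30
-- (the same setting SSMS exposes under SQL Server Logs > Configure)
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 30;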

[Image: error-log-01]

Here is the blog post where I wrote about sp_cycle_errorlog: SQL SERVER – Recycle Error Log – Create New Log file without a Server Restart

You can read the comments on that post to see various usage scenarios.

One of the blog readers quoted my post and said that he was unable to recycle the ERRORLOG file using the sp_cycle_errorlog command. I asked him to share the ERRORLOG content captured while running the command. Here is what we saw in the ERRORLOG.

2016-09-09 08:24:37.45 spid70 Attempting to cycle error log. This is an informational message only; no user action is required.
2016-09-09 08:24:37.46 spid70 Error: 17049, Severity: 16, State: 1.
2016-09-09 08:24:37.46 spid70 Unable to cycle error log file from ‘E:\MSSQL10_50.INSTANCE1\MSSQL\Log\ERRORLOG.5’ to ‘E:\MSSQL10_50.INSTANCE1\MSSQL\Log\ERRORLOG.6’ due to OS error ‘1392(The file or directory is corrupted and unreadable.)’. A process outside of SQL Server may be preventing SQL Server from reading the files. As a result, errorlog entries may be lost and it may not be possible to view some SQL Server errorlogs. Make sure no other processes have locked the file with write-only access.”

I asked him to open the file in SSMS, and he informed me that he was getting an error.

.Net SqlClient Data Provider Unicode file expected.

All of this points to corruption of the file in such a way that SQL Server is not able to read it correctly. Strangely, they were able to open it via Notepad.

SOLUTION

I asked them to make sure that there was no corruption on the E drive. Since we were able to read the files via Notepad, I told them to move the “bad” file to some other location and then try again.

We moved all ERRORLOG.* files to a different folder, and after that the recycle worked fine.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Unable to Bring SQL Online – DoSQLDataRootApplyACL : Failed to Create Directory Tree at SQLDataRoot


Here is an email which I received from one of my clients, where he mentioned that he was unable to bring SQL Server online.

Pinal,
I need assistance in getting the failed SQL Server resource online on the node named “SQLDBN01” for the Cluster named “DBCLUSTER01 “.

I checked the event log and have all generic messages. Do you have any expert trick?

Thanks!

This was one of the shortest emails asking for assistance that I have ever received. I sent an equally short reply.

<Reply>Please try
SQL SERVER – Steps to Generate Windows Cluster Log?</Reply>

They shared the cluster log, and below is the relevant section.

INFO [RES] SQL Server : [sqsrvres] Dependency expression for resource ‘SQL Network Name (LGA-DB1)’ is ‘([a15276ac-8bf2-d013-8ab5-4e09eb594606])’
ERR [RES] SQL Server : [sqsrvres] Worker Thread (C8107C10): Failed to retrieve the ftdata root registry value (hr = 2147942402, last error = 0). Full-text upgrade will be skipped.
WARN [RES] SQL Server : [sqsrvres] Worker Thread (C8107C10): DoSQLDataRootApplyACL : Failed to create directory tree at SQLDataRoot.
ERR [RES] SQL Server : [sqsrvres] SQL Cluster shared data upgrade failed with error 0 (worker retval = 3). Please contact customer support
ERR [RES] SQL Server : [sqsrvres] Failed to prepare environment for online. See previous message for detail. Please contact customer support
INFO [RES] SQL Server : [sqsrvres] SQL Server resource state is changed from ‘ClusterResourceOnlinePending’ to ‘ClusterResourceFailed’
ERR [RHS] Online for resource SQL Server failed.

Here are the warnings and errors:

  1. Failed to retrieve the ftdata root registry value (hr = 2147942402, last error = 0). Full-text upgrade will be skipped.
  2. DoSQLDataRootApplyACL : Failed to create directory tree at SQLDataRoot.
  3. SQL Server <SQL Server>: [sqsrvres] SQL Cluster shared data upgrade failed with error 0 (worker retval = 3). Please contact customer support

SOLUTION/WORKAROUND

I asked them to check registry values at

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\Setup

For SQLDataRoot, we found that they previously had a drive M, which has now been replaced by drive O. The registry still referenced the old drive, and hence the problem.
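A quick way to check, and if needed correct, the value without opening regedit is the small PowerShell sketch below; the instance key MSSQL11.MSSQLSERVER is taken from the path above, and the new path O:\MSSQL is only an example.

# Read the current SQLDataRoot value for the instance
$setupKey = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\Setup'
(Get-ItemProperty -Path $setupKey -Name SQLDataRoot).SQLDataRoot

# Point SQLDataRoot at the new drive (O:\MSSQL is an example path)
Set-ItemProperty -Path $setupKey -Name SQLDataRoot -Value 'O:\MSSQL'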

[Image: DoSQLDataRootApplyACL-01]

Once they changed the value in the registry to the correct location, failover worked like a charm.

If it still doesn't work for you, generate the cluster log again and you should find some more folder-related errors. Check the references in the registry keys and you should be able to fix them.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL : How to Find Unused Indexes details


The Dynamic Management View (DMV) sys.dm_db_index_usage_stats tracks index usage details for the database. This DMV gives information about indexes which are being updated but are not used in any seek, scan, or lookup operations.

The query below lists:

  1. Table name
  2. Index name
  3. No of Rows
  4. Size of Index
  5. Type of Index
  6. Drop SQL statement

Dropping an index is done at your own risk. Validate the data before dropping anything from the database.

SELECT OBJECT_NAME(i.object_id) AS ObjectName,
       i.name AS [Unused Index],
       MAX(p.rows) AS [Rows],
       8 * SUM(a.used_pages) AS 'IndexSize(KB)',
       CASE
           WHEN i.type = 0 THEN 'Heap'
           WHEN i.type = 1 THEN 'Clustered'
           WHEN i.type = 2 THEN 'Non-clustered'
           WHEN i.type = 3 THEN 'XML'
           WHEN i.type = 4 THEN 'Spatial'
           WHEN i.type = 5 THEN 'Clustered xVelocity memory optimized columnstore index'
           WHEN i.type = 6 THEN 'Nonclustered columnstore index'
       END AS index_type,
       'DROP INDEX ' + i.name + ' ON ' + OBJECT_NAME(i.object_id) AS 'Drop Statement'
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s ON s.object_id = i.object_id
     AND i.index_id = s.index_id
     AND s.database_id = DB_ID()
JOIN sys.partitions AS p ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
WHERE OBJECTPROPERTY(i.object_id, 'IsIndexable') = 1
  AND OBJECTPROPERTY(i.object_id, 'IsIndexed') = 1
  AND (s.index_id IS NULL -- dm_db_index_usage_stats has no reference to this index
       OR (s.user_updates > 0 AND s.user_seeks = 0 AND s.user_scans = 0 AND s.user_lookups = 0)) -- index is being updated, but not used by seeks/scans/lookups
GROUP BY OBJECT_NAME(i.object_id), i.name, i.type
ORDER BY OBJECT_NAME(i.object_id) ASC

 


SQL SERVER – Creating Clustered ColumnStore with InMemory OLTP Tables


When SQL Server 2016 was released, a number of enhancements were discussed around how InMemory OLTP removed what used to be limitations. I personally saw a real boost in some of the capabilities coming into InMemory OLTP, especially around the concept that Microsoft calls Operational Analytics in this release. I am going to talk about this concept in my SQLPASS session later this month in October, but here is a teaser of what I learnt about it.

InMemory OLTP, when it was first introduced, was an awesome capability: a latch-free environment with very fast inserts that do not lock the table. This optimistic concurrency model was useful in specific areas. Clustered columnstore, on the other hand, gave SQL Server the ability to store data in a columnar format. Though these two capabilities existed, there was no mashup of the two features until SQL Server 2016.

Now inside SQL Server we have the ability to use a Clustered ColumnStore Index on top of an InMemory OLTP table. A typical example of the same is shown below.

[Image: inmemory-cci-01]

Here is the script I used in the above image for your reference.

CREATE TABLE tbl_my_InMemory_CCI (  
    my_Identifier INT NOT NULL PRIMARY KEY NONCLUSTERED,  
    Account NVARCHAR (100),  
    AccountName NVARCHAR(50),  
    ProductID INT,
	Quantity INT,  
    INDEX account_Prod_CCI CLUSTERED COLUMNSTORE  
    )  
    WITH (MEMORY_OPTIMIZED = ON);  
GO

Make sure you run this on a database that has InMemory capability turned on. If you try to run it on a system database, you are likely to get an error like:

Msg 41326, Level 16, State 1, Line 1
Memory optimized tables cannot be created in system databases.
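If your user database does not yet have a memory-optimized filegroup, here is a minimal sketch of adding one before running the script above; the database name and container path are assumptions, not from the post.

-- Add a memory-optimized filegroup and a container to an existing user database
-- (InMemoryDemo and C:\Data\InMemoryDemo_mod are placeholder names)
ALTER DATABASE InMemoryDemo
ADD FILEGROUP InMemoryDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE InMemoryDemo
ADD FILE (NAME = 'InMemoryDemo_mod_container', FILENAME = 'C:\Data\InMemoryDemo_mod')
TO FILEGROUP InMemoryDemo_mod;
GO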

I am sure many of you will already be aware of this, but since I just learnt it, I thought it was worth a share. Are you using any of the InMemory or ColumnStore features inside SQL Server? What is your use case? Do let me know via the comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Unable to Attach Database – File Activation Failure – The Log Cannot be Rebuilt


[Image: log-rebuilt-01]

Once I was in a situation where a client of mine faced a disaster. They lost the drive which held the transaction log files. Since they did not have effective monitoring, they had not realized that the backup jobs were failing. So essentially they only had the MDF files for the user databases, and they were left with the option of rebuilding the log. Let us learn how a file activation failure can produce an interesting error.

They tried the command below.

CREATE DATABASE UserDB
ON (FILENAME = 'E:\DATA\UserDB.mdf')
FOR ATTACH_REBUILD_LOG

But, this was the error received while attaching the database.

File activation failure. The physical file name “E:\LOG\UserDB_log.ldf” may be incorrect.
The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.
Msg 1813, Level 16, State 2, Line 1
Could not open new database ‘UserDB’. CREATE DATABASE is aborted.

The error is very clear and interesting, so I thought of reproducing it.

CREATE DATABASE SQLAuthority
GO
USE SQLAuthority
GO
CREATE TABLE Foo (bar INT)
GO
BEGIN TRANSACTION
INSERT INTO Foo VALUES (1)

Yes, there is no ROLLBACK or COMMIT, because I wanted to leave an open transaction, as mentioned in the error message. Once done, I stopped SQL Server, renamed the MDF and LDF files for the database, and started the SQL Server service again. As expected, the database went into the “Recovery Pending” state.

Here was the cause shown in the ERRORLOG:


2016-09-29 00:31:55.86 spid21s Starting up database ‘SQLAuthority’.
2016-09-29 00:31:55.87 spid21s Error: 17204, Severity: 16, State: 1.
2016-09-29 00:31:55.87 spid21s FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority.mdf for file number 1. OS error: 2(The system cannot find the file specified.).
2016-09-29 00:31:55.87 spid21s Error: 5120, Severity: 16, State: 101.
2016-09-29 00:31:55.87 spid21s Unable to open the physical file “C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority.mdf”. Operating system error 2: “2(The system cannot find the file specified.)”.
2016-09-29 00:31:55.87 spid20s [INFO] HkHostDbCtxt::Initialize(): Database ID: [4] ‘msdb’. XTP Engine version is 0.0.
2016-09-29 00:31:55.87 spid21s Error: 17207, Severity: 16, State: 1.
2016-09-29 00:31:55.87 spid21s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_log.ldf’. Diagnose and correct the operating system error, and retry the operation.
2016-09-29 00:31:55.87 spid21s File activation failure. The physical file name “C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_log.ldf” may be incorrect.

At this point, since I wanted to reproduce the client's situation, I dropped the database so that I could try attaching the MDF file. I renamed the file to “SQLAuthority_original.mdf”.

CREATE DATABASE SQLAuthority ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_original.mdf')
FOR ATTACH
GO
CREATE DATABASE SQLAuthority ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_original.mdf')
FOR ATTACH_REBUILD_LOG
GO

Both commands failed with the same error


File activation failure. The physical file name “C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_log.ldf” may be incorrect.
The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.
Msg 1813, Level 16, State 2, Line 1
Could not open new database ‘SQLAuthority’. CREATE DATABASE is aborted.

SOLUTION/WORKAROUND

There is an undocumented option called ATTACH_FORCE_REBUILD_LOG. Here is the command which worked in my lab.

CREATE DATABASE SQLAuthority ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_original.mdf')
FOR ATTACH_FORCE_REBUILD_LOG

Here was the message:

File activation failure. The physical file name “C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_log.ldf” may be incorrect.
New log file ‘C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\SQLAuthority_log.ldf’ was created.
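Because the log was rebuilt while transactions were open, the attached database may be transactionally inconsistent, so a consistency check right after the attach is a sensible follow-up (my suggestion, not part of the original post):

-- Verify the physical and logical integrity of the newly attached database
DBCC CHECKDB ('SQLAuthority') WITH NO_INFOMSGS, ALL_ERRORMSGS;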

Restoring from a backup is always the best solution; the above is a last resort. Have you ever been in such a situation?

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Cannot Show Requested Dialog – Property Size is Not Available for Database


Freelancing gives me a lot of opportunities to see issues which I have not seen earlier. Learning never stops for me, and I love sharing what I learn every day. Let us learn how to fix the error “Property Size is not available for Database”.

One of my clients reported that they were seeing errors when opening the tempdb database properties.

[Image: tempdb-er-01]

Property Size is not available for Database ‘[tempdb]’. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)

When I searched for a similar error on the internet, many posts pointed to the owner of the database. So, I ran the below:

sp_changedbowner 'sa'

But since it was the tempdb database, I received the message below.

Msg 15109, Level 16, State 1, Line 1
Cannot change the owner of the master, model, tempdb or distribution database.

I noticed that opening the properties dialog took some time. We looked into sys.dm_exec_requests and found PAGEIOLATCH waits on the TempDB database. We checked the Event Viewer, and there were disk-related errors for the drive that contains the TempDB files.

SOLUTION/WORKAROUND

If you are getting the same error, check database ownership first. If the owner is set correctly, check the health of the database by running DBCC CHECKDB on it. In my case, it was the storage holding the TempDB files: the Windows team confirmed that the storage had corruption and we were not able to read from or write to TempDB. We stopped SQL Server, deleted the TempDB files, and started SQL Server again; as expected, they were recreated. Later, we moved them to a new drive and replaced the bad one. Luckily this client had a separate drive for TempDB, so nothing else was impacted.
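For reference, moving the TempDB files to a new drive is only a metadata change plus a restart; a minimal sketch is below (tempdev and templog are the default logical file names, and N:\TempDB is a placeholder path).

-- Point the TempDB files at the new drive; the change takes effect at the next SQL Server restart
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'N:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'N:\TempDB\templog.ldf');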

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Fix Error Msg 10794, Level 16 – The operation ‘CREATE INDEX’ is not supported with memory optimized tables.


Whenever I write about a new concept, I see that people reading the blog do give it a try. Many times I get queries saying that a script did not work after they learnt the concepts about memory-optimized tables. The blog I wrote on how InMemory OLTP supports ColumnStore is here – read it: SQL SERVER – Creating Clustered ColumnStore with InMemory OLTP Tables

After reading that blog, a reader dropped a note in my inbox saying the feature was not actually working as expected. I was surprised that he wrote back, and I asked what the error was in order to get more details. He was getting error 10794. I immediately wanted to see the complete error because, of late, the error messages from Microsoft have been quite elaborate in general, and I was not proved wrong.

Msg 10794, Level 16, State 13, Line 15
The operation ‘CREATE INDEX’ is not supported with memory optimized tables.

Now, as soon as the message was sent, I understood what he was doing. To replicate this error, I wrote the following script for reference:

--Create the table
CREATE TABLE tbl_my_InMemory (
	my_Identifier INT PRIMARY KEY NONCLUSTERED 
        HASH WITH (BUCKET_COUNT = 100000),  
    Account NVARCHAR (100),  
    AccountName NVARCHAR(50),  
    ProductID INT,
	Quantity INT  
)WITH
(
    MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY
);
GO
-- Create a Clustered ColumnStore Index
CREATE CLUSTERED COLUMNSTORE INDEX CCI_tbl_my_InMemory 
ON tbl_my_InMemory

[Image: error-cci-inmemory]

As you can see clearly, in the previous blog the clustered columnstore index was created inline as part of the CREATE TABLE statement. In this release of SQL Server, Microsoft has not given us the capability to add a clustered columnstore index with CREATE INDEX after the memory-optimized table has been created.

I am sure at this moment there are some usage restrictions, but there is a lot to learn from each other too. Do share anything you run into when working with these features. I would love to blog about those too.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Added New Node in Windows Cluster and AlwaysOn Availability Databases Stopped Working


Almost all the time, whenever there is a wizard, it is a human habit to go with the defaults and finally click Finish. One of my clients sent me the email below. In this blog post we are going to learn what to do when you have added a new node to a Windows cluster and the AlwaysOn availability databases stopped working.

Hi Pinal,
We are trying to add a new node to the AlwaysOn Availability Group, and for that we must add the new node to the Windows cluster. Before doing this in production, we are trying it in our test environment, and we ran into issues. We noticed that as soon as the node is added in Windows, our databases which were part of an availability group went into a “not synchronizing” state. Later I noticed that local disks were added to the cluster under “Available Storage”.

Have you seen this issue? What is wrong with our setup?

Thanks!

I asked for any errors in the event log, and they shared the following:

Log Name: System
Source: Microsoft-Windows-FailoverClustering
Event ID: 1069
Task Category: Resource Control Manager
Level: Error
Description: Cluster resource ‘Cluster Disk 2’ of type ‘Physical Disk’ in clustered role ‘Available Storage’ failed. The error code was ‘0x1’ (‘Incorrect function.’)

I told them that they must have followed the wizard and must have forgotten to “uncheck” the highlighted checkbox.

[Image: add-node-disk-01]

This is the default setting and has caused issues for many, including me during a demo. They confirmed the same. So what is the solution now?

SOLUTION/WORKAROUND

To work around this problem, we must remove the disk resources from Failover Cluster Manager. Once they are removed, we need to bring those drives back online in Disk Management.
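If you prefer to script it, here is a small sketch using the FailoverClusters PowerShell module; the resource name 'Cluster Disk 2' is taken from the event-log message above, so adjust it to match your environment.

# Check the wrongly added disk resource, then remove it from the cluster
# so it can be brought back online in Disk Management
Get-ClusterResource -Name 'Cluster Disk 2'
Remove-ClusterResource -Name 'Cluster Disk 2' -Force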

Have you run into the same issue? Has a default setting in a wizard caused a problem for you anywhere else?

Reference: Pinal Dave (http://blog.sqlauthority.com)


Cloud Consulting Pre-Engagement Questionnaire – A Bookmark


The cloud phenomenon is no longer a dream for many; it is a reality for several customers I have been working with. Many of my customers are looking for guidance, and from time to time I get an opportunity to have some deep, constructive discussions. These discussions can range from whether they should be going to the cloud or not, whether a given app is cloud-ready, or whether we can lift-and-shift an application directly to some cloud provider. Though each of these discussions leads from one point to another, there is a lot of profiling I do with customers. In this blog, I want to talk about a conversation I had with one of my customers who was evaluating the cloud for one of his applications. Let us look at a Cloud Consulting Pre-Engagement Questionnaire.

You should BOOKMARK this blog post for future reference.

[Image: cloudconsulting]

Though the engagement was a short call, I had more questions to ask than answers to give. By the end of the one-hour session, the customer knew he had a lot of homework to do before coming back to me for details. So, what are some of the considerations one needs to be aware of when planning to migrate to a cloud-based deployment?

Understanding User base:

  • Can you define the roles of different users in the organization?
  • What is the number of users and users per role?
  • What is the number of users in the scope of the Solution under question?
  • What is the projected growth of the user base in the next three to five years?

Understanding application functionality:

  • What current applications perform the business functions of the Solution?
  • How do the applications assist in the business processes?
  • What functions are required for the business processes?
  • What functions are required by the different user groups?
  • What is the change management process in place for the messaging environment?
  • Are there documented governance and archiving policies for the organization?

Applications Availability / Backup requirements:

  • What are the current service level agreements (SLAs)?
  • What are the high availability requirements?
  • What are the business continuity or site availability requirements?
  • What is the current recovery point objective (RPO)?
  • What is the current recovery time objective (RTO)?

Double clicked Application Architecture:

  • What is the current state application environment?
  • What is each application’s function in relation to the business processes the Solution supports?
  • What monitoring solution is being used?
  • Does the current monitoring solution support the Solution to be deployed?

Knowing the Migration and Governance needs:

  • What are the specific information requirements?
  • Are there current data transformations?
  • Are there any data sets that need cleansing?
  • What data management systems are in place?

Technology Architecture needs:

  • What is the current server administration model (centralized, decentralized, or other)?
  • Who is currently managing the current infrastructure (internal, outsourced, centralized, decentralized, helpdesk, etc.)?
  • How is administration performed (scripted, fully automated, native tools etc.)?
  • Are there additional administrative tasks for which you require delegation?
  • Are you planning to centralize or consolidate the Solution?
  • In how many locations do you plan to deploy the Solution?
  • What are the hardware needs and existing servers and components?
  • What are the attributes and interactions of the hardware in relation to the business applications?
  • What are the existing hardware components for each application layer?
  • What is the overall topology of your network?
  • What is the number of geographic sites?
  • What is the user distribution by region or site?
  • Can you provide link speed, utilization, latency, and available bandwidth between central and remote sites (if applicable)?
  • Which sites provide Internet connectivity for external user access?

By this time we were at the end of the call. But I pointed out that security needs, network needs, directory understanding, authentication topology, etc. had been missed out. All I can say is that at the end of this session, the IT manager was confident about what needed to be done.

Have you done such assessments for your cloud consulting engagements? Do you think these are relevant and will help you? What questions would you add to the above? I must admit, only the ones that came to my mind have been listed here.

Reference: Pinal Dave (http://blog.sqlauthority.com)



SQL SERVER – Unable to Bring Resource Online – Error – Could Not Find Any IP Address that this SQL Server Instance Depends Upon


When it rains, it pours. In the last few days, the last few contacts from my customers have all been about cluster-related issues. Once again, the SQL Server resource was not coming online in Failover Cluster Manager. I asked them to check whether an ERRORLOG was being generated and, if so, to share the content (SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location). Let us learn how to fix this "unable to bring resource online" error.

Here are the messages reported in the ERRORLOG.

2016-10-27 08:27:09.71 spid10s Error: 26054, Severity: 16, State: 1.
2016-10-27 08:27:09.71 spid10s Could not find any IP address that this SQL Server instance depends upon. Make sure that the cluster service is running, that the dependency relationship between SQL Server and Network Name resources is correct, and that the IP addresses on which this SQL Server instance depends are available. Error code: 0x103.
2016-10-27 06:28:11.72 spid10s Error: 17182, Severity: 16, State: 1.
2016-10-27 06:28:11.72 spid10s TDSSNIClient initialization failed with error 0x103, status code 0xa. Reason: Unable to initialize the TCP/IP listener. No more data is available.
2016-10-27 06:28:11.72 spid10s Error: 17182, Severity: 16, State: 1.
2016-10-27 06:28:11.72 spid10s TDSSNIClient initialization failed with error 0x103, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. No more data is available.
2016-10-27 06:28:11.73 spid10s Error: 17826, Severity: 18, State: 3.
2016-10-27 06:28:11.73 spid10s Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
2016-10-27 06:28:11.74 spid10s Error: 17120, Severity: 16, State: 1.
2016-10-27 06:28:11.74 spid10s SQL Server could not spawn FRunCommunicationsManager thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

I asked them what change they had made to this cluster, and they informed me that they had changed the virtual server name of SQL Server.

SOLUTION/WORKAROUND

Since the resources were recreated, the dependencies were not set properly. Here is the dependency tree.

  • SQL Server > Depends on All Disks and Network Name.
  • Network Name > Depends on IP Address.
  • IP Address > Depends on None.
  • Disks > Depends on None. If there are mount points, then they would have dependency on disk.
  • SQL Server Agent > Depends on SQL Server.

The easiest way to see the dependencies is by using the built-in tools of Failover Cluster Manager, as shown below.

[Image: Dependency]

Once dependencies were corrected, SQL Server was able to come online.
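If you prefer to check and fix the dependencies from PowerShell instead of the GUI, here is a small sketch using the FailoverClusters module; the resource names below are examples ('SQL Network Name (LGA-DB1)' comes from the cluster log above), so substitute your own.

# View the current dependency expression for the SQL Server resource
Get-ClusterResourceDependency -Resource 'SQL Server'

# Re-create the dependency on the disk and the network name
Set-ClusterResourceDependency -Resource 'SQL Server' -Dependency '[Cluster Disk 1] and [SQL Network Name (LGA-DB1)]'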

Have you faced a similar error that had some other solution?

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – SQLPASS 2016 – Feedback and Rating – Kick Start! SQL Server 2016 Performance Tips and Tricks


Earlier this year, I presented at SQLPASS 2016 in Seattle on SQL Server 2016, and I am very happy to see the feedback for my sessions. I presented two sessions; I will write about the feedback for one session in this blog post and the other later this week. This session was about SQL Server 2016 Performance Tips and Tricks.

Session Title:

Kick Start! SQL Server 2016 Performance Tips and Tricks

Session Abstract:

Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2014 / 2016 has introduced many new features. In this 75-minute session we will be learning quite a few of the new features of SQL Server 2014 / 2016. Here is a glimpse of the features we will cover in this session:

  • Live plans for long running queries
  • Transaction durability and its impact on queries
  • New cardinality estimate for optimal performance
  • In-memory OLTP optimization of superior query performance
  • Columnstore indexes and performance tuning
  • Query Store

These 75 minutes will be the most productive time for any DBA and developer who wants to quickly jump-start with SQL Server 2016 and its new features.

[Image: sqlpass2016_2]

Session Feedback:

All feedback is on the scale of 3 (where 3 is Excellent)

Attendee ratings: (416 attendees, 57 responses)

Overall session – Did the title, abstract, and session level align to what was presented? 2.93 (out of 3)
Session content – Was the content useful and relevant? 2.93 (out of 3)
Presentation – Was the speaker articulate, prepared, and knowledgeable on the subject matter? 2.96 (out of 3)
Promotions – Was the session free from blatant sales material or self-promotion such as blogs, company, or services? 2.98 (out of 3)

[Image: sqlpass2016-feedback]

Individual Comments:

  • Loved it.
  • One of the best speakers
  • Full of energy and fun! Thank you!
  • Pinal was a perfect post-lunch presenter. So much energy!!!
  • Great speaker. Not only was Pinal informative, but he is enjoyable to watch.
  • Energy passion humor and good content
  • In 20 years of Sql Server classes and presentations, user groups and summits this is the best session I have ever seen. Pinal is a hilarious, engaging, passionate genius!
  • Glad I attended! Pinal’s humor and the use of practical examples made this a well spent time.
  • The ABSOLUTE BEST session!!! I am so glad I attended. Really wish he’d taught ALL my other sessions.
  • Very good and entertaining
  • This was my first session with Pinal. Very lively entertaining presentation. I plan to look into in-memory oltp in conjunction with columnstore indexes
  • “Pinal, did a great job at keeping the audience awake and engaged”
  • Good session
  • Engaging and dynamic
  • Very little useful information. A colorful speaker but session was of very limiting value. Less comedy more content.
  • The timing was short it felt rushed. I think you need a longer time slot to present.
  • Got to attend the last 30 minutes. I enjoyed Pinal’s presentation (as usual) and on the fly troubleshooting query plan. Full house as expected!
  • Stellar presentation-will attend all of his presentations in future.
  • Very good speaker and keeps the audience entertained while providing useful information
  • Funny and able to capture audience attention by engaging discussion throughout the session. I expected to get more information from the session, though, this may be due to time constraints
  • Amazing speaker and he gives so much back to community. Thanks Pinal for amazing session
  • Great presentation and good examples. Good information to take back and use for upgrade and even checking current system. Also realized after that Pinal Dave is responsible for helping me learn TSQL with the joes 2 pros books. Thank you
  • Awesome, great info.
  • Pinal was fantastic! Best session today for sure. I learned and had fun at the same time. Thanks for taking selfies with all of us!!! Peace brother!
  • Very enthusiastic speaker
  • Perfect session and Speaker did perfect job. I really like it!!!
  • Been reading Pinal’s blog for a long time. Seeing him present was a great experience. Pinal is one of the best speakers I have ever listened to. thanks
  • Great Session!
  • Pinal’s session was perfect for after lunch. His humorous, engaging presentation style kept things lively as he delivered relevant content.
  • Educational and entertaining
  • I liked Pinal Dave a lot. I thought he was funny and I really appreciate his concise answers that I find in web searches. I thought this session was too short on substance in that I think only 2 tip topics were presented.
  • Pinal has fantastic energy. Does great interacting and involving the audience.
  • This was by far the best session i attended at the conference. I have been following Dave Pinal’s blog for sometime and I was very happy to be able to meet him in person and attend this session. He explained things very clearly
  • He was funny, the best presenter
  • One of the best speaker and session. I think in my 10 years career, this is one of the best session ever. I will attend SQLPASS just for Pinal
  • By far one of the most engaging and most prepared speakers. Great interaction with the audience and topic was fantastic. When you consider this was on the last day of the conf and the room was a standing room only
  • Best speaker ever
  • excellent presentation
  • The session was a hit. I know that a lot of English-as-a-first-language speakers/organizers/attendees have a peeve against speakers with foreign accents, but Pinal’s session proves all of them wrong. I am happy that Pinal has been selected to speak
  • Speaker is very interactive and good.
  • So far the best session I have attended in Summit 2016, it was a laughter riot along with the technical flow; speaker know the topic well and understand the audience level the content and demo was so relevant and people enjoyed it really, some (about 20)
  • :-) The session was fast paced and super cool. I laughed so much in the session, Pinal Dave is super funny.
  • Top notch. Equally entertaining and informative … of course …it’s Pinal!
  • A very enjoyable session for Friday afternoon
  • Need a larger room for Pinal!
  • Room was way too hot and should have been a bigger room given the speaker was Pinal Dave.
  • room was a bit small
  • There was a creaking sound on the stage that was a little distracting
  • late arrivals created tight standing area. seat mixed in were free but you’d have to disrupt a row to get a seat.
  • It was great he had a big room. It got packed!!! And everyone was engaged and participated
  • Room was full fir this session
  • The room could have been bigger. Pinal draws a crowd.
  • Probably need a bigger room for this one. Came late and left early because I couldn’t find a seat
  • He may need a bigger room next year.
  • Everything in the room was fine. The screens were clear and Dave spoke nice and clear and I could hear him perfectly.
  • For him might consider larger rooms.
  • Everything was perfect. Except that the engineer has to know a little more about switching to full screen. It is not the presenter’s laptop problem, rather the switch that has to be put on for the slides to show up in full screen.
  • Wow, I wish that room size would have some more space so that all people could have fitted.
  • Perfect accommodations.

I have not published comments which just say “Good”, “Great” or “Amazing”. There were many single worded positive comments.

[Image: session1]

My Opinion:

I am extremely happy with the feedback from attendees. I truly appreciate that people found me informative and funny. It is a huge compliment for me, as I spent hours and hours building this presentation and planning how I would keep the audience awake and engaged after lunch on the last day of the event. Lots of people told me afterwards that they were not able to get into the room, as it was already standing room only within a few minutes of the session starting. I hope to find a way to make this presentation available to everyone who missed it.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Back to Basics – What is Azure?


Recently I was at a community session talking to a bunch of computer science students about databases and the type of work I have been doing lately in consulting. I wanted to talk about performance tuning and why it is critical to understand the basics. Some of the fundamental concepts I learnt in my college days still help me solve some of the complex problems I tackle in consulting. With that in mind, I started my talk with some of my recent assignments and how I have been troubleshooting performance for both SQL Server on a VM and SQL Server as a service on Azure. About 10 minutes into the talk, I realized something was totally wrong: I could see blank faces, just like when a professor teaches a complex topic. I have been there and could recognize the reaction. I said I needed a glass of water and paused. I asked one of the students what was going on. He said, “You have mentioned the word ‘Azure’ close to 4-5 times in the past 3 minutes. Can you tell us what Azure is? Is this a new database in the industry?”

[Image: windows-azure-cloud]

This was the “aha” moment for me. I told the group that before we could start talking about the performance of SQL Server in Azure, we needed a basic understanding of what a cloud service like Azure is and what it is not. So the next 15 minutes of my lecture were about exactly that. I have captured a snapshot of it here so that I can use it in the future too.

Azure services are made up of networks, drives, servers, etc. all hosted by Microsoft to provide scalable, on-demand services. This architecture comes with many positive attributes, but there are trade-offs when compared to an on premise solution.

Azure has an advantage in cost, deployment time, and reduction in administrative and management overhead, and it can be used to easily scale up or down depending on the workload. However, as stated, the underlying infrastructure is hosted. As an administrator, you do not have the same level of control over the individual components, and there is less opportunity for deep performance tuning than there is for on-premises servers. Azure hardware is commodity hardware meant for scaling out and handling many applications.

Due to these differences, performance tuning requires a slightly different approach. Some of the direct query tuning will be like a standard on-premises SQL Server install, but other aspects such as tuning IO are significantly different, as the configuration of disks, HBAs and caching is all taken care of by Azure. Performance in Azure needs to be viewed from a more architectural and design point of view to leverage the best aspects of Azure (cost, low maintenance) while continuing to have great performance with your solution.
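
To illustrate what looking at performance from that angle can mean in practice, here is a minimal PowerShell sketch that queries the sys.dm_db_resource_stats DMV of an Azure SQL Database for recent resource consumption. The server name, database name and credentials below are placeholders you would replace with your own.

#Hypothetical Azure SQL Database connection details - replace with your own values
$server   = "yourserver.database.windows.net"
$database = "YourDatabase"
$query = "SELECT TOP 10 end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent FROM sys.dm_db_resource_stats ORDER BY end_time DESC"

#sys.dm_db_resource_stats reports resource usage of an Azure SQL Database in 15-second intervals
Invoke-Sqlcmd -ServerInstance $server -Database $database -Username "youradmin" -Password "YourStr0ngP@ssword" -Query $query | Format-Table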

Later, as the session expanded, I explained how the world has been moving from Packaged Software to IaaS (Infrastructure as a Service) to PaaS (Platform as a Service) to SaaS (Software as a Service). When I finished explaining these terms and taxonomies, it became clear what the students wanted: practical knowledge of what the industry trends are and what they need to be aware of.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Back to Basics – What is Azure?

SQL SERVER – Database Mirroring Error -The Specified Network Name is No Longer Available

Have you ever seen a situation where SQL Server mirroring is impacted because of external factors? One of my customers reported several of the following errors in the SQL Server ERRORLOG (see SQL SERVER – Where is ERRORLOG? Various Ways to Find ERRORLOG Location). Let us learn how to fix this database mirroring error.

SQL SERVER - Database Mirroring Error -The Specified Network Name is No Longer Available fixmirroringerror

2016-11-10 07:05:27.90 Server Error: 1474, Severity: 16, State: 1.
2016-11-10 07:05:27.90 Server Database mirroring connection error 4 ’64(The specified network name is no longer available.)’ for ‘TCP://SQL1.sqlauthority.com:5022’.
2016-11-10 07:05:29.37 spid27s Error: 1474, Severity: 16, State: 1.
2016-11-10 07:05:29.37 spid27s Database mirroring connection error 4 ‘An error occurred while receiving data: ‘121(The semaphore timeout period has expired.)’.’ for ‘TCP://SQL2.sqlauthority.com:5022’.

As we can see above, 1474 is a generic error that can wrap many different OS errors. In the errors above, it surfaced due to:

  1. OS Error 64 = The specified network name is no longer available.
  2. OS Error 121 = The semaphore timeout period has expired.
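
If you want to quickly check how often error 1474 has been logged, here is a small sketch using the xp_readerrorlog procedure against the current ERRORLOG; adjust the instance name for your environment.

#Search the current SQL Server ERRORLOG (0 = current file, 1 = SQL Server log) for error 1474
Invoke-Sqlcmd -ServerInstance "." -Query "EXEC master.dbo.xp_readerrorlog 0, 1, N'1474'" | Format-Table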

Sometimes we can see the error number, but not the message. Here is what you can do: I use the net helpmsg command and pass it the error number.

SQL SERVER - Database Mirroring Error -The Specified Network Name is No Longer Available dbm-err-01
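
For a copy-ready version, the same lookup can be run straight from a PowerShell or command prompt:

#Translate the Windows error numbers seen in the mirroring errors above
net helpmsg 64    # The specified network name is no longer available.
net helpmsg 121   # The semaphore timeout period has expired.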

As a DBA, we need to look at more data to figure out what is going on with the machine. In this customer’s scenario, upon reviewing the System event log, I found many networking warnings being reported:

11/10/2016 07:05:27 AM Warning 27 e1rexpress HP Ethernet 1Gb 2-port 361T Adapter #3 Network link is disconnected.
11/10/2016 07:03:34 AM Warning 461 CPQTeamMP Team ID: 0 Aggregation ID: 1 Team Member ID: 0 PROBLEM: 802.3ad link aggregation (LACP) has failed. ACTION: Ensure all ports are connected to LACP-aware devices.
11/10/2016 07:03:10 AM Warning 27 e1rexpress HP Ethernet 1Gb 2-port 361T Adapter #3 Network link is disconnected.
11/10/2016 07:03:11 AM Warning 27 e1rexpress HP Ethernet 1Gb 2-port 361T Adapter #2 Network link is disconnected.
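
If you want to pull the same warnings yourself, here is a minimal sketch; the -After timestamp below is just the incident window from this example, so adjust it for your own case.

#List recent Warning entries from the System event log
Get-EventLog -LogName System -EntryType Warning -After "2016-11-10 07:00:00" |
    Select-Object TimeGenerated, Source, EventID, Message |
    Format-Table -AutoSize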

The above warnings occur at the same time as when SQL Server is reporting errors. The errors in the system event log indicate that something is going on with the network cards which needs to be addressed. I informed my client that I was not familiar with this particular error, but that I was sure their hardware folks would be able to tell us what it is and how to fix it, if it is a problem.

When we contacted the customer’s hardware team, they informed us that the NIC drivers were very old and not compatible. As soon as the NIC drivers were upgraded, the issues were resolved. Let me know if you have faced any similar database mirroring error.
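
As a side note, if you want to quickly see how old the NIC drivers on a server are, this small sketch can help; it assumes the NetAdapter module, which is available on Windows Server 2012 / Windows 8 and later.

#List network adapters along with their driver version and driver date
Get-NetAdapter | Select-Object Name, InterfaceDescription, DriverVersion, DriverDate | Format-Table -AutoSize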

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Database Mirroring Error -The Specified Network Name is No Longer Available

SQL SERVER – Rule “Windows Management Instrumentation (WMI) Service” failed

SQL SERVER - Rule "Windows Management Instrumentation (WMI) Service" failed wmi256

When I visit a client site to do performance tuning consulting, I sometimes get trapped by some unrelated issue. Here is one such situation, where the client was upgrading from SQL Server 2008 R2 to SQL Server 2014 on a two-node cluster. During the upgrade, two rules were failing. In this blog post we will learn how to fix the failing rule “Windows Management Instrumentation (WMI) Service”.

Failing Rule 1

Rule “Windows Management Instrumentation (WMI) service” failed.
The WMI service is not running on the cluster node.

Failing Rule 2

Rule “Not clustered or the cluster service is up and online.” failed.
The machine is clustered, but the cluster is not online or cannot be accessed from one of its nodes. To continue determine why the cluster is not online and rerun setup instead of rerunning the rule since the rule can no longer detect a cluster environment correctly.

As per the MSDN documentation, I located the Detail.txt file, and here was the stack when the error occurred:

(06) 2016-11-19 19:59:09 Slp: Error while retrieving the cluster WMI namespace on machine NODE03; Error is as follows System.Management.ManagementException: Invalid class
at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
at System.Management.ManagementObject.Initialize(Boolean getObject)
at System.Management.ManagementBaseObject.get_Properties()
at System.Management.ManagementBaseObject.GetPropertyValue(String propertyName)
at Microsoft.SqlServer.Configuration.WMIInterop.Cluster.get_Name()
at Microsoft.SqlServer.Configuration.Cluster.Rules.WMIServiceFacet.ClusterWmiCheck()

I tried the following test for WMI based on my internet search:

  • Start > Run > WBEMTest
  • The “Windows Management Instrumentation Tester” will launch
  • Select Connect
  • Namespace: Root\MSCluster
  • Select Connect

If more options become available, it means you are connected and WMI is working. In my case, I received the error below:

Number: 0x80041010
Facility: WMI
Description: Invalid Class
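
If you prefer to run this check from a script instead of the WBEMTest GUI, here is a minimal PowerShell sketch that attempts the same namespace connection (run it from an elevated prompt on the cluster node); it fails with the same “Invalid class” error when the namespace is broken.

#Attempt to read the cluster WMI class; "Invalid class" here means root\MSCluster is missing or broken
Get-WmiObject -Namespace "root\MSCluster" -Class MSCluster_Cluster | Select-Object Name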

WORKAROUND/SOLUTION

Since there was an invalid class for the cluster, the cluster namespace was missing in WMI, so we need to compile the .mof file for the cluster. Here is the magical command which fixed all of the above errors.

Mofcomp.exe ClusWMI.mof

Once the above command had run, it added the namespace back.
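
For reference, here is how the fix looks when scripted end to end. ClusWMI.mof is normally found in the WBEM folder, but do verify the path on your node.

#ClusWMI.mof usually lives in the WBEM folder - verify the path on your node
Set-Location "$env:windir\System32\wbem"
.\mofcomp.exe ClusWMI.mof

#Re-run the WBEMTest or Get-WmiObject check from earlier to confirm root\MSCluster is back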

Did you see same error and found different solution? Please share with other readers.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Rule “Windows Management Instrumentation (WMI) Service” failed
