…upgrade step ‘ISServer_upgrade.sql’ encountered error 6528, state 1, severity 16.

Hello!

I was patching a SQL Server 2016 development cluster. The final installation report did not show any errors and everything was 'green'. But the SQL group was in the 'failed' state, because the SQL Server service, and hence the SQL Agent service, had failed. Failing the SQL group over to another node did not help; the SQL services remained in the failed state.

I checked the cluster 'critical events' for the SQL Server resource. The errors recorded there were not very helpful, so I checked the server's event log in 'Event Viewer'. It had the following errors logged –

Error 1-

Event ID: 912
Description:
Script level upgrade for database ‘master’ failed because upgrade step ‘ISServer_upgrade.sql’ encountered error 6528, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.

Error 2-

Event ID: 3417
Description:
Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database…

Error 3-

Event ID: 17063
Description:
Error: 50000 Severity: 16 State: 127 Cannot drop the assembly ‘ISSERVER’, because it does not exist or you do not have permission.

Errors #1 and #3 indicated that something had gone wrong with SQL Server Integration Services (SSIS), yet SSIS was not installed on the server. Based on Error #2, I was thinking of rebuilding the 'master' database, but decided to take a 'second opinion' and contacted my teammates (Rajesh and Santosh).

I had taken backups of all the system and user databases before I started patching. 'SSISDB' was present on the server, which indicated that SSIS had been installed on the server but might have been removed at some point.

We decided to install only SSIS again. Before that, we dropped 'SSISDB', as we had its backup (a minimal T-SQL sketch of the drop follows). After installing SSIS, we added the SQL instance name to the SSIS service configuration file 'MsDtsSrvr.ini.xml'.
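
The drop itself was a couple of statements. A sketch, assuming a verified, restorable full backup of SSISDB already exists –

-- Run only after confirming a valid full backup of SSISDB exists.
USE master;
GO
ALTER DATABASE SSISDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- disconnect any active sessions
DROP DATABASE SSISDB;
GO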

The 'MsDtsSrvr.ini.xml' file can be found in the path –
C:\Program Files\Microsoft SQL Server\130\DTS\Binn

We added the SQL instance name to it as shown below –

<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <ServerName>Servername\Instancename</ServerName>
    </Folder>
    <Folder xsi:type="FileSystemFolder">
      <Name>File System</Name>
      <StorePath>..\Packages</StorePath>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>

The steps above solved the issue and we could bring the SQL services back 'online'. Failover/failback worked fine as well.

Thanks very much, Rajesh and Santosh!!


Linked Server – ‘Remote Scan’ and ‘Remote Query’

Recently I was working on a performance issue involving a linked server. It took me quite a while to resolve, but I finally fixed it.

Here is some background… There were two servers: 'A', the main server, and 'B', an archive server for Server A. On Server B, a stored procedure was scheduled through a job. The procedure was written to perform the following steps –

  • Based on a date parameter, fetch data from Server A (through the linked server, because the job was scheduled on Server B) and save it in a temp table. One of the columns in the fetched data was a unique key, say 'UniID', used in the next steps.
  • Based on 'UniID', insert the data from the temp table into archival tables on Server B.
  • DELETE the data from Server A (again through the linked server), since it has now been moved to the archival tables on Server B.

The first two steps worked fine. The problem was in the third step, i.e. the DELETE through the linked server. The job either timed out or ran indefinitely while the DELETE was being executed.

The DELETE was written using a command similar to the following –

DELETE FROM [LinkedSrvA].[LinkedSrvA_DB].[Schema].[LinkedSrvA_Table] WHERE UniID IN (SELECT UniID from #TempTable)

When I checked the estimated query execution plan, I found that most of the work was being done by the 'Remote Scan' operator.

[Screenshot: RemoteScan1a]

Now I knew where the problem was: I had to eliminate this operator to fix the issue. I had seldom come across this operator, so I read up on it. I found that it 'reads all the data from the remote object', which can cause performance issues. I also found that performance improves if 'Remote Scan' can be replaced by 'Remote Query', which 'submits a query to a remote destination for execution and then returns all rows produced by the remote destination'.
To check whether I could get 'Remote Query', I wrote the above DELETE query in a different way.

/* Declare variables */
DECLARE @TempMinUniID int;
DECLARE @TempMaxUniID int;

/* Assign variable values */
SET @TempMinUniID = (SELECT MIN(UniID) FROM #TempTable);
SET @TempMaxUniID = (SELECT MAX(UniID) FROM #TempTable);

/* DELETE rewritten as a range predicate */
DELETE FROM [LinkedSrvA].[LinkedSrvA_DB].[Schema].[LinkedSrvA_Table]
WHERE UniID >= @TempMinUniID AND UniID <= @TempMaxUniID;

Yes, there are a few more lines in the above code, but the query now used the 'Remote Query' operator and ran successfully. I got rid of 'Remote Scan'! (One caveat: the range predicate is equivalent to the IN list only if every UniID between the minimum and maximum is present in #TempTable; otherwise the range DELETE could remove extra rows.)
[Screenshot: Remotequery1a]
Following is a screen shot of the operator properties. You can compare the parameters (CPU, Operator and Subtree costs) and see why 'Remote Query' performs much better. Also take a look at the bottom of the screen shots: 'Remote Scan' scans the remote object, whereas 'Remote Query' executes the query remotely.

[Screenshot: RemoteScanAndQuery]
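
As an aside, a pass-through query is another way to guarantee that a statement runs entirely on the remote server. A sketch, assuming the linked server has the 'RPC Out' option enabled (the ? placeholders are bound to the variables in order) –

/* Hypothetical alternative: execute the DELETE on the remote server itself. */
EXEC (N'DELETE FROM [LinkedSrvA_DB].[Schema].[LinkedSrvA_Table]
       WHERE UniID >= ? AND UniID <= ?',
     @TempMinUniID, @TempMaxUniID) AT [LinkedSrvA];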
Though the problem is solved now, I still need to understand how writing the query in a different way results in a different (and more efficient) query plan. Thanks!


New in SSMS – SQL Server Diagnostics

*** This feature has been removed from SSMS.***

Hi!
A couple of weeks ago, I watched a live webcast presented by the great Kendra Little @Kendra_Little. It was titled 'SSMS Secrets' and was an episode of her 'Dear SQL DBA…' series. In that episode, Kendra demonstrated a new SSMS feature – SQL Server Diagnostics. I decided to try it out.

I downloaded the latest SSMS (v17.1), the version Kendra used. In SSMS, if you navigate to Help -> About, you'll see the following version information.

[Screenshot: 1_SSMSAbout]
I also downloaded the new SQL Server sample database – WideWorldImporters. Our favorite 'AdventureWorks' has retired after its long service. Bye bye, 'AdventureWorks'! You'll be missed!!

I navigated to the 'Tools' menu in SSMS, but could not find any 'SQL Server Diagnostics' option. I had assumed it would be part of this new SSMS.

[Screenshot: 2_ToolsOld2]

After some searching, I found from this MSSQL Tiger Team blog that the new feature must be downloaded as an SSMS extension. I downloaded the setup .msi file from this link and installed it.
[Screenshot: 3_diagoneInstall]

After the installation, the 'SQL Server Diagnostics' option was available under the 'Tools' menu.
[Screenshot: 4_diagonetoolsSSMS2]

SQL Server Dump files
As per the SQL Server CAT team MSDN blog, a SQL dump file '…is a file containing a snapshot of the running process – and parts or all of the memory space of that process. The snapshot also contains the call stack of every thread the process has created.'
A minidump file '…is a much smaller file that contains the memory for the call stack of all threads, the CPU registers and information about which modules are loaded.'

When you open a .mdmp file in Visual Studio, you get a report similar to the one depicted in the next screen shot.
[Screenshot: DumpFilenVStudio]

Analyze Dumps 
When you navigate to Tools -> SQL Server Diagnostics -> Analyze Dumps, you'll be directed to the 'Diagnostic Analysis' web page (which opens within SSMS itself), as shown in the next screen shot. So make sure you are connected to the internet.

[Screenshot: 5_diagonetoolsSSMS_2]

Fill in the required fields and upload the dump file. Once you click 'Upload', you can follow the progress through three steps – Upload, Analysis and Recommendations.
[Screenshot: 7_diagonetoolsSSMS_5]

The Analysis phase can take some time depending upon the size of the dump file; for a 12 MB dump file, it took around 9 minutes. Once the Recommendations phase is over, you get a report as shown in the next screen shot. Usually it will link to a KB article and suggest a SQL patch which might fix the issue.
You can also report an issue to 'Microsoft Connect' with the link available at the bottom of the page.
[Screenshot: 8_diagonetoolsSSMS_5]

The same report is also sent to your email address. You can view past uploads under the 'Upload History' tab.
[Screenshot: 10_UploadHistory]

View Recommendations 
When you navigate to Tools -> SQL Server Diagnostics -> View Recommendations, you'll be directed to the 'Latest Cumulative Updates' web page in SSMS. Here, you can select options from two drop-down menus – 'Products' and 'Features/Tags'. You'll be presented with download links for CUs/patches for the Product-Feature combination you have selected.
[Screenshot: ViewRecommendations2]

Developer APIs 
When you navigate to Tools -> SQL Server Diagnostics -> Developer APIs, you'll be directed to the 'SQL Server Diagnostics API' page in a new browser window. On this home page, you'll find an introduction to the diagnostics APIs and 'Get Started' information.

[Screenshot: DiagnAPIs]

I remember that, when working with the Customer Support team, Microsoft used to provide a link for uploading dump files so that they could analyze them. With 'SQL Server Diagnostics', users now have a 'DIY' tool: we can carry out initial diagnostics on our own.

Thanks Kendra for introducing us to this useful SSMS feature!!

SQL Server Silent/Unattended installation and Errors I faced.

Hello!

I was installing SQL Server on my new laptop and decided to try an 'unattended' or 'silent' installation. Here are my learnings from this exercise.

An '.INI' (initialization) file is required for silent installation. This is a configuration file which holds installation parameters and their values. To create this file, start the SQL installation through the GUI and continue till the 'Ready To Install' screen. You can see the path of 'ConfigurationFile.ini' at the bottom of the screen. Note the path and cancel the SQL Server setup.

[Screenshot: config_ini]

We need to change some parameters in this .INI file for silent installation. I began by changing the 'QUIETSIMPLE' parameter from 'FALSE' to 'TRUE'.

[Screenshot: 1_a]

Then I initiated the silent installation from the command prompt.
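
The command looked similar to the following (a sketch; the file path is a placeholder specific to my machine) –

Setup.exe /ConfigurationFile=C:\Temp\ConfigurationFile.ini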

[Screenshot: 2_a]

I received the following error message –
The /UIMode setting cannot be used in conjunction with /Q or /QS.

[Screenshot: 3_a]

I got this error because I had set 'QUIETSIMPLE' (i.e. '/QS') to 'TRUE' while the '/UIMode' parameter was still enabled. I commented out the 'UIMODE' parameter to disable it and started the silent installation again.

[Screenshot: 4]

I received the following error message –
The /IAcceptSQLServerLicenseTerms command line parameter is missing or has not been set to true. It is a required parameter for setup action you are running.

[Screenshot: 5_a]

As mentioned in the error message, this parameter was missing from my .INI file. I added it to the file and restarted the installation.

[Screenshot: 6]
This time the installation started, as shown in the next screen shot.

[Screenshot: 7_a]

But after some progress, it stopped with the following error message –
Missing sa account password. The sa account password is required for SQL Authentication mode.

[Screenshot: 8_a]

Since I had selected 'Mixed Mode' authentication, the 'sa' password was needed. I added the 'SAPWD' parameter to the .INI file, as shown in the next screen shot, and restarted the installation.

[Screenshot: 9_a]
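
Putting it all together, the net changes in my ConfigurationFile.ini looked like this (a sketch; the password is a placeholder) –

; Run setup without user interaction, but show progress.
QUIETSIMPLE="True"
; UIMODE cannot be combined with /Q or /QS, so it is commented out.
; UIMODE="Normal"
; Required for any unattended installation.
IACCEPTSQLSERVERLICENSETERMS="True"
; Required because SECURITYMODE="SQL" (Mixed Mode) was selected.
SAPWD="<YourStrongPassword>"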

This time the setup completed without any errors.
Thanks!

Decoding SQL Server Performance – Microsoft Mumbai Tech Community Event

Hello!

I was very excited to see an invitation email in my inbox from Sr. PFE Mahendra Prasad Dubey, announcing another Microsoft Mumbai Tech Community event. It was scheduled for Saturday, 6th May, at the Microsoft Mumbai office. Since I was in Mumbai for some personal work, I decided not to miss this event.

The event title was ‘Decoding SQL Server Performance’. It was presented by Sr. PFE Mahendra Prasad Dubey and Sr. Consultant Narendra Aangane.

While opening the session, Mahendra asked a question which made the audience chuckle: 'Who is facing a SQL Server performance issue?' 😉 Finding performance pain points and fixing them is probably the most challenging task SQL developers and DBAs face. Everyone was eager to know which new tricks they would learn from this session to help them solve performance problems.

I took some notes during this session and am sharing them here with you. I hope you'll find them useful.

The agenda for the session was –

  • Fundamentals of Performance Tuning
  • Locks and Blocks
  • TEMPDB performance issues
  • Handling cursors
  • Reading query execution plan
  • Query tuning
  • Exploring Extended Events (XE)

Most performance issues revolve around the following parameters –

  • SQL Configuration
  • IO
  • CPU
  • Memory
  • Query
  • TempDB

Once you zero in on one of the above parameters, you can dive to a more granular level and find the root cause of the performance issue.

Mahendra insisted that your fundamental concepts should be very clear so that you can troubleshoot performance issues effectively. He started with the basics of indexing and the different index types in SQL Server. To explain index architecture visually, Mahendra played his favorite video on index architecture from YouTube.

Indexing – following are some points to remember about indexing –

  • Indexes need storage space. Use 'sp_spaceused' to track table growth after adding indexes.
  • Use SET STATISTICS IO ON to check the scan count and logical reads a query performs. See if you can reduce them by adding appropriate indexes (see the sketch after this list).
  • In the query execution plan, check the value of the 'Number of Executions' parameter. A high value indicates a performance issue.
  • Do not create all the 'missing' indexes suggested by SQL Server; use your judgement as well. Test indexes before adding them to a production environment, because one big inappropriate index can cause serious performance issues.
  • Use query hints only when you have tested them thoroughly and are confident about them.
  • Use 'Filtered Indexes'. They cover only the data that matters to you, and they do it faster.
  • We can use 'Filtered Statistics' too. More explanation here.
  • You can create indexes on computed columns as well.
  • While rebuilding indexes, rebuild clustered indexes first and then non-clustered indexes. This reduces the duration of the maintenance job.
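
As a quick illustration of the STATISTICS IO and filtered-index points above, here is a sketch with a hypothetical table and columns –

-- Hypothetical table and columns, for illustration only.
SET STATISTICS IO ON; -- report scan count and logical reads per table

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderStatus = 'Open'; -- only a small slice of rows is 'Open'

-- A filtered index that covers just the rows this query cares about:
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate)
INCLUDE (OrderID)
WHERE OrderStatus = 'Open';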

Some points discussed about Columnstore indexes were –

  • Columnstore indexes are useful in data-warehouse scenarios.
  • Compression in columnstore indexes helps reduce IO.
  • Columnstore achieves better compression (and hence better performance) than rowstore because of repeating data patterns.
  • Because of the much higher level of compression, a columnstore index occupies considerably less space than rowstore.
  • Columnstore indexes have only a 'scan' operation; there is no 'seek' operation.
  • Some data types are not supported in columnstore indexes.
  • There were many limitations on columnstore indexes in previous versions of SQL Server, but in SQL Server 2016 most of these limitations have been removed.
  • SQL Server 2016 introduces the concept of 'Real-Time Operational Analytics' (i.e. OLTP + OLAP), and columnstore indexes are an important component of it (a one-line example follows this list).
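
For reference, creating a clustered columnstore index is a single statement; the table name here is hypothetical –

-- A clustered columnstore index for a warehouse-style fact table.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;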

Mahendra and Narendra also demonstrated the internal workings of SQL Server statistics with simple examples.

IO Issues –

Benchmarking is important while troubleshooting performance issues, especially those related to IO. We can use tools ranging from a simple Excel spreadsheet to the more advanced 'Management Data Warehouse' (MDW) for performance benchmarking.

'Activity Monitor' is an often-overlooked tool which can be a good starting point for your performance troubleshooting journey. The topmost part of Activity Monitor is a graphical representation of the following parameters –

  • % Processor Time
  • Waiting Tasks
  • Database I/O
  • Batch requests /sec

The refresh interval can be adjusted in Activity Monitor, as shown in the following screen shot.

[Screenshot: ActMonitor]

SQL Server 2016 introduces a new section – Active Expensive Queries – in Activity Monitor.

[Screenshot: ActiveExpensiveQueries2]

Following are notable wait stats to watch for when fixing IO issues –

  • ASYNC_IO_COMPLETION
  • LOGMGR
  • PAGEIOLATCH_x
  • WRITELOG
  • IO_COMPLETION

Some other points discussed related to IO issues were –

  • Along with 'Avg. Disk Sec/Read' and 'Avg. Disk Sec/Write', 'Avg. Disk Sec/Transfer' is also an important IO counter. It represents the average time needed for each read and write.
  • The DMV sys.dm_io_pending_io_requests returns a row for each pending I/O request.
  • The DMV sys.dm_io_virtual_file_stats returns I/O statistics for data and log files (see the sketch after this list).
  • Diskspd (diskspd.exe) replaces the old 'SQLIO' utility.
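
A quick sketch of how the file-stats DMV can be used to spot slow files –

-- Per-file average IO stall, highest read stall first.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_stall_ms DESC;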

Extended Events (XE) –

In future SQL Server versions, 'Extended Events' will replace the SQL Server Profiler trace, so Mahendra and Narendra asked the audience to get familiar with Extended Events as soon as possible. They demonstrated how we can use Extended Events, through the GUI and T-SQL, to track SQL events (see the sketch below). Extended Events are very lightweight and include over 1,000 events. You can view 'Live Data' for the events you have selected. You can find a few more details at BrentOzar.com.
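
For a taste of the T-SQL side, here is a minimal sketch of a session that captures statements running longer than one second –

-- Minimal Extended Events session: capture statements slower than 1 second.
CREATE EVENT SESSION [SlowStatements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000) -- duration is reported in microseconds
ADD TARGET package0.event_file
    (SET filename = N'SlowStatements.xel');
GO
ALTER EVENT SESSION [SlowStatements] ON SERVER STATE = START;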

Further Learning –

Mahendra and Narendra discussed the following learning resources for enhancing technical skills –

  • TechNet Virtual Labs – you can get hands-on experience with new features of SQL Server.
  • MCM videos – these videos teach some important core SQL Server concepts in an easy-to-understand manner.
  • coursera.org and edx.org – free online courses in various IT disciplines, not just SQL Server.
  • YouTube – yes, YouTube is not just for entertainment!

To summarize…

Like previous 'Mumbai Tech Community' events, this session too was a great learning and community experience. Mahendra and Narendra could not cover all the topics on the agenda due to time constraints, but the audience can take a cue from the topics discussed in this session and explore further on their own, with the help of the learning resources mentioned earlier. The session concluded with the promise of a next session. Today, Mahendra generously shared the presentation slides with the audience.

Thanks, Microsoft, for SQL Server, for the wonderful venue, and for equally wonderful experts like Mahendra and Narendra, who take great efforts to pass on their passion for SQL Server to the community!!

SSL Certificate error – TDSSNIClient initialization failed with error 0x80092004, status code 0x80…

Hi!

Over the previous weekend, I was working on a task to update the SSL certificate on one of the SQL Server clusters we support. The old certificate was about to expire in the next few days, and we were required to import a new one.

The Process –
The process of importing an SSL certificate is described in detail in this MS Support article. Since it was a cluster, I carried out the steps mentioned in the section 'Enable a certificate for SSL on a SQL Server clustered installation' of that article. An MSDN article explains this process of enabling an SSL certificate with screen shots. Since we make changes in the registry, a reboot of the cluster node is required for the changes to take effect.

To summarize, the process includes the following steps –

  • Import the new certificate.
  • Copy the thumbprint value. Make sure to exclude extra spaces and special characters like '?'.
  • Update the 'SuperSocketNetLib' registry value with the new thumbprint (see the path sketch after this list).
  • Fail over to the other node.
  • Restart the current node.
  • Repeat the above process on the other node.
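
For reference, the registry value in question lives at a path similar to the following; the instance ID varies per installation (MSSQL13 corresponds to SQL Server 2016) –

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.<InstanceID>\MSSQLServer\SuperSocketNetLib\Certificate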

The Problem –
I followed the above steps. But after the changes, the SQL Server resources could not come 'online' and were displayed as 'failed' in Failover Cluster Manager. The following error messages were logged in the SQL error log –

The server could not load the certificate it needs to initiate an SSL connection. It returned the following error: 0x8009030e. Check certificates to make sure they are valid.

Error: 26014, Severity: 16, State: 1.

Unable to load user-specified certificate [Cert Hash(sha1) “<new cert thumbprint>“]. The server will not accept a connection. You should verify that the certificate is correctly installed.

Error: 17182, Severity: 16, State: 1.

TDSSNIClient initialization failed with error 0x80092004, status code 0x80. Reason: Unable to initialize SSL support. Cannot find object or property.

Error: 17826, Severity: 18, State: 3.

Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.

Error: 17120, Severity: 16, State: 1.

SQL Server could not spawn FRunCommunicationsManager thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

I reverted to the old thumbprint and old certificate (which was still valid), and the SQL services started successfully. Clearly, I was missing something while importing the new SSL certificate…

The Missing Piece –
One difference I noticed was in the icon appearing against the certificate name. The old certificate had a 'key' in its icon, whereas the new certificate's icon was missing it.
[Screenshot: 1]

This difference was related to the private key corresponding to the certificate. Double-click a certificate and a details window similar to the following one will pop up. Notice the private-key icon and the message related to it. But I still could not resolve the issue! 😦

[Screenshot: 2]

The Solution –
After spending some more time in futile efforts, I called Ravindra, a senior DBA from our team, for help. He analysed the problem and concluded that the issue was indeed the missing private key: the SQL Server service account did not have access to it. He searched for a way to fix the certificate's private-key association and found 'certutil', a command-line program to manage certificates. He executed the following command, which installed the new certificate successfully, and we could bring the SQL resources online. All green! 🙂

certutil -repairstore my "<thumbprint>"

Here the parameter 'my' indicates the 'Personal' certificate store, and '<thumbprint>' is the new certificate's thumbprint. The '-repairstore' verb restores the association between the certificate and its private key.

Thanks very much, Ravindra!

(Reference – SSLSupportdesk.com article)


Envisioning reality – Microsoft Mumbai Tech Community Event

Hello!

Around two weeks ago, I received an email from Mahendraprasad Dubey (Microsoft Senior Premier Field Engineer) announcing a community event on 17th December. I registered for this event immediately, for two reasons. First – I had attended a couple of community events in Pune, organized by the 'Pune IT Pro Community' and led by Mahendra, and they were wonderful learning experiences with great knowledge sharing. And second – the venue for this event was the Microsoft Mumbai office 🙂

Yesterday I attended this event, organized by the 'Microsoft Mumbai Tech Community' @MumbaiTechGroup. Measured against both of the expectations mentioned earlier, the event was a great success. First – great learning with an amazing community spirit; and second – a stunning Microsoft office, designed with an 'Ocean' theme right from the ceilings to the coffee mugs! …Yes, my coffee mug had a picture of a starfish 🙂

The event was titled 'Envisioning Reality' and was divided into three sessions. This article is about my experience and learning from these sessions.

Session I – IoT – Connecting Everything
[Image: iot]
This session was presented by Gandhali Samant @s_gandhali (Microsoft Senior Technology Evangelist). I had attended a Microsoft Azure Vidyapeeth webinar presented by Gandhali before, and I was looking forward to this session as an opportunity to listen to her in person.

To be honest, I knew very little about IoT before attending this session, and I was doubtful about how much I would grasp. But Gandhali started the session right from the definition of IoT, built on it step by step, and ended with a small live IoT demo.

Gandhali explained that, though the Internet of Things (IoT) is a buzzword today, the concept has been around for the past few decades. What makes IoT the 'next big thing' is the unlimited storage and unlimited computation the cloud has to offer. This combination will deliver business insights like never before.

The following four components make up IoT –

Things – this 'T' in IoT has a really vast reach. It includes anything with an intelligent chip inside and everything that can possibly connect to the internet. In short, everything that is 'smart'!

Connectivity – in IoT, connectivity is not limited to mere HTTP. Protocols like ZigBee, MQTT and AMQP power IoT.

Data – all the 'things' mentioned above generate data. One can imagine how vast the expanse can be! It can be all sorts of readings from sensors (temperature, pressure, humidity, medical parameters etc.), images from surveillance cameras, maintenance records from appliances and even the activities of cattle. The list can go on and on…

Analytics – 'Microsoft Azure IoT Suite' gives you the power to analyze data generated from various sources. This in turn offers new business insights and can even predict the future. With Microsoft Power BI data visualization tools, one can bring all this data to life.

Azure IoT Hub is at the center of the IoT Suite. IoT Hub supports development on all platforms and languages, and one can start with Azure certified devices as well. Gartner has predicted that 25 billion devices will be connected by 2020, generating a $7.2 trillion IoT market.

Gandhali also explained Microsoft’s principles for investing in IoT. These are –

  • Provide connectivity to existing and new devices.
  • Facilitate new insights by harnessing power of untapped data.
  • Enable fast solution development.

During this session, I was introduced to the following Microsoft IoT products –

  • 'Windows 10 IoT Core' is an OS built for IoT.
  • 'Windows 10 IoT Core Dashboard' helps to set up and connect Windows 10 IoT Core devices.
  • 'Azure Stream Analytics' can ingest events from many data sources in real time.
  • 'Azure Stream Analytics Query Language' (similar to T-SQL) is used for 'streaming computations' (a small sketch follows this list).
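
To give a flavor of that query language, here is a hypothetical sketch (the input and output names are made up) that averages device temperatures over 30-second windows –

SELECT DeviceId, AVG(Temperature) AS AvgTemperature
INTO [output-powerbi]
FROM [input-iothub] TIMESTAMP BY EventTime
GROUP BY DeviceId, TumblingWindow(second, 30)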

It was exciting to watch Gandhali's IoT demo generating real-time visual data in the form of Power BI charts.

Session II – Bring your data to life with Microsoft Power BI
[Image: powerbi]

The second session was presented by Narendra Angane (Microsoft Senior Consultant). I had attended Narendra's sessions on SQL Server during Pune community events and liked them a lot. This time Narendra talked about Power BI. During this session, Mahendra also joined Narendra to explain some of the points.

Power BI is an interactive data visualisation platform from Microsoft which transforms data into business insights. It offers an organization-wide analytics solution, sharable through various dashboards and reports.

Power BI is SaaS, and its base version is free of cost; for advanced or 'Pro' features you pay a nominal cost. Power BI Desktop is a free-to-download tool. Power BI reports can be shared in the form of .pbix files. Instead of sharing reports through email, we can use 'Organizational Content Packs' in Power BI to distribute reports and datasets. SQL Server Reporting Services (SSRS) will soon have Power BI features integrated into it. Microsoft continuously works on improving Power BI and releases updates each month.

The Power BI report development life cycle can be depicted as –
PREPARE –> EXPLORE –> REPORT –> SHARE

During the session, Narendra explained the following cool features of Power BI –

  • Power BI can connect to more than 60 on-premises, cloud and streaming data sources.
  • There are various types and sub-types of charts available to bring data to life.
  • You can 'play' a scatter chart to see data in motion and how it has progressed over time. I found this feature really exciting.
  • You can write simple English queries to explore data. You can also define synonyms to make queries more user-friendly.
  • You can define schedules to refresh data.
  • Use Q&A to explore data using natural language and to build an FAQ section.
  • The Power BI app is free to download and can be installed on any device, so you can access data anywhere, anytime. Narendra demonstrated Power BI charts on his mobile phone.
  • In Power BI, you can connect to an OData feed like any other data source.
  • You can restrict access to particular data by defining parameters in Power BI.
  • R scripts can be integrated into Power BI to visualise data. We can also mix and match R and other visual controls.
  • Power BI reports can be easily distributed through links or embedded as iframe tags.
  • Narendra demonstrated a word-cloud chart. I found this new chart type really wonderful.

During this session, the audience asked many questions about Power BI features, and Narendra answered all of them with a smiling face.

Session III – SQL Server on Linux
[Image: vnext]

This third session was presented by Mahendraprasad Dubey (Senior Premier Field Engineer). Mahendra is quite active in the Pune and Mumbai Microsoft tech communities and has led many such community events. He was the one who initiated this event and who informed and followed up with other community members about it.

Since Microsoft announced 'SQL Server on Linux' in March this year, there has been tremendous curiosity about it in the IT community. Being a SQL Server DBA, I also wanted to know more about 'SQL Server v.Next'. This session answered many of my questions.

Mahendra demonstrated the installation of 'SQL Server v.Next' on Linux. When we install SQL Server on Windows, we usually navigate through many screens, specifying various installation parameters. That was not the case with the Linux installation: it was done through the command line and finished, surprisingly, in less than a minute! He also demonstrated how SQL Server Management Studio (SSMS) 17.0 RC1 can be used to connect to a SQL Server v.Next instance.
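
For reference, on RHEL the install boils down to a couple of commands (a sketch, assuming the Microsoft package repository is already registered on the machine) –

sudo yum install -y mssql-server      # install the engine
sudo /opt/mssql/bin/mssql-conf setup  # accept the license and set the SA password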

Following are a few more points I learnt about SQL Server v.Next –

  • At present, only the SQL Server database engine is available in the Linux version. Other features like the SQL Agent service are not available, but as the product matures we can expect new additions.
  • Only SQL authentication is available. Windows authentication is not (yet) available, as there is no AD integration.
  • We can restore a database backup taken from SQL Server on Windows onto SQL Server on Linux, and the other way round.
  • We cannot change the data file paths in SQL Server v.Next due to the inherent nature of the Linux OS. When you try to do this through SSMS, it gives an error.
  • There are no major changes in how SQL Server works when installed on Linux.
  • DBAs will need to learn about Linux OS scheduling, as it is different from Windows OS scheduling.

By offering SQL Server on Linux, Microsoft is definitely going to increase its customer base. It is also an important step in strengthening the Microsoft and Red Hat hybrid cloud partnership.

Mahendra also demonstrated how we can use Visual Studio efficiently for some SQL Server tasks. One of them was exporting SQL Server schema scripts for a particular SQL version.

Mahendra conducted his session very interactively, with a sprinkle of humour throughout.

A community spirit –
I cannot thank all the presenters enough for the time and effort they devoted to this event. All the hard work they put in and their passion for sharing knowledge are really commendable. They made sure the audience remained engaged during the sessions, and all of them had infectious enthusiasm. It was indeed a great example of camaraderie in the Microsoft community.

…How will I survive?
This was the question I was asking myself while all these new technology changes were unfolding in front of me during the event. So many new things to learn and so much ground to cover… How can I catch up? To be honest, I was (and I am) really worried. Coincidentally, a paragraph in a book I started reading a few days ago has the answer to my question. That book is 'Deep Work' by Cal Newport, and the paragraph reads –

'We have an information economy that's dependent on complex systems that change rapidly. Some of the computer languages (Jason) Benn learned didn't exist ten years ago and will likely be outdated ten years from now. …To remain valuable in our economy, therefore, you must master the art of quickly learning complicated things. This task requires deep work. If you don't cultivate this ability, you're likely to fall behind as technology advances.'

Thanks!

What SQL Server Professionals Can Learn From Writing and Blogging – by Edwin Sarmiento

Hello!

When I decided to start this SQL Server blog, I had all sorts of questions in my mind –

  1. Why start one more blog when there are so many SQL Server blogs out there?
  2. Who is going to read my blog?
  3. Where am I going to post my writings?
  4. When can I find time to write?
  5. How am I going to find topics for the blog?
  6. Can I be regular in my writing?
  7. Do I need to be an expert before posting my writing?
  8. What if I post something incorrect mistakenly?

On 29th November, I attended a webinar titled What SQL Server Professionals Can Learn From Writing and Blogging, by Edwin M Sarmiento @EdwinMSarmiento (MVP and MCM), and I am glad I got answers to all my questions and much more!

Edwin's webinar slides included very little text; he shared most of his thoughts through his speech. I took notes during the webinar and thought it would be a good idea to share this learning experience. Hence this blog post!

**Thanks very much Edwin for allowing me to share this!**

To begin with…
Write it down!
Edwin started by emphasizing the importance of writing things down and making notes. Working as database administrators, we learn new things almost daily: a new SQL Server feature we tested, an option we hadn't tried before, or how we fixed or troubleshot an issue. But we don't necessarily remember everything. Writing things down, or documenting them, helps us remember. These journal entries can also develop into ideas for blog posts. (My Q#5 answered here.)

So what do we learn…?
3 Ms – Edwin summarized the core part of the webinar with 3 Ms.

Mastery – When we decide to write a blog post, we gather as much information as possible about the topic. No, one need not be an expert on that particular topic (Q#7 answered). During the process of writing, we seek accurate and reliable information, and by the time we prepare and publish our writing, we certainly have more knowledge of the topic than we had before. Thus we move slowly towards 'mastery'. (Edwin shared how he initially found clustering complicated, but after some learning and effort he ended up writing a series of articles on clustering.) (Q#1 & #2 answered – writing is first for yourself and then for others to read.)

Marketing – Writing and blogging help us build our 'brand'. They definitely give us an edge over others and help us advance in our professional careers. Recruiters don't rely on a resume alone; they search your name on the internet, and the information which appears on the first page of the search results can be crucial in building your impression. (Edwin displayed search results for two SQL Server greats we all know – Brent Ozar and Pinal Dave. Try it yourself and you will see what Edwin meant!)

Meeting – Writing and blogging become our professional identity over time. People read the blog. Some care to comment on it; some might love it, criticize it or point out mistakes. We should take criticism positively and be thankful to those who point out mistakes (Q#8 answered). During this whole process, we interact with new people and our circle expands. (What better example of 'meeting' than this post itself? I contacted Edwin through Twitter and he generously allowed me to write about his webinar.)

Some tips from Edwin…

*Always carry a note-taking device with you. It can be an app on your smartphone, or a simple notepad/notebook and a pencil. As soon as you get an idea for your blog, write it down. This way you make sure that nothing worthwhile slips your mind.

*Schedule at least 15 minutes daily dedicated to your writing. The things which get scheduled are the things which get done! Over the work week, you'll have 75 minutes set aside for thinking about and writing your blog. (Q#6 answered)

*Use down time as 'think time'. (Q#4 answered) Whenever you find some free time (during lunch, waiting for something, or travelling), think about your next writing assignment. This way, you can carve out time for your writing from a busy schedule.

*Write in simple words. Storytelling is the best way of communicating; you should be able to convey your ideas easily to readers. Do not try to put too much into a single blog post. Instead, divide it into a series of articles.

*Kill distractions! Deliberately stay away from things which interrupt your concentration while preparing and writing the post. Switch off your Twitter, Facebook, email etc. notifications till you finish.

*Write about what you love. Just be yourself. Develop your own style of writing (or of telling a story!). Of course, this demands practice, and you can achieve it over time.

*Have an opinion. For example, share what you liked or disliked about a particular SQL Server feature.

*You can select any medium or tool to publish your writing; what matters more is to keep on writing. (In fact, Edwin did not speak at all about tools or sites for publishing blog posts; the question was raised only in the Q&A session. Edwin advised saving your writings in document form on your local machine as well.) (Q#3 answered)

*And the most important – JUST DO IT! Do not think about the end result of your writing; enjoy the process. Once you finish your article and look back, you are not at the same place where you started. You have definitely moved a little ahead.

[Image: justdoit2]

I cannot thank Edwin and the 'PASS Professional Development Virtual Chapter' enough for the time and effort they put into organizing this wonderful webinar. It was a really encouraging learning experience.

Thanks!

Msg 25718, Level 16, State 3, Line 13 The log file name “(null)” is invalid…

Hello!

A few days back, I was working on a task to find failover details for 'AlwaysOn' Availability Groups (AGs). SQL Server tracks the health of AlwaysOn AGs through Extended Events: when an AlwaysOn group is created, an Extended Events session named 'AlwaysOn_health' is also created, and it captures many AlwaysOn events. Following is a list of articles which describe how this session can be used to find failover details.

  1. StackExchange article – Find AG fail-over using T-SQL.
  2. A WordPress blog – Another way to find AG fail-over using T-SQL.
  3. MSSQLTIPS article – A detailed description of how to monitor AlwaysOn using SSMS GUI.

An AG failover is recorded with the 'availability_replica_state_change' extended event. I used a query from the StackExchange article to find the details I needed.
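
The core of such a query reads the session's file target. A sketch, similar in spirit to the StackExchange query –

-- Read replica state changes from the AlwaysOn_health .xel files.
SELECT xe.object_name,
       CAST(xe.event_data AS xml) AS event_xml
FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL) AS xe
WHERE xe.object_name = 'availability_replica_state_change';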

But on a few SQL instances, I was getting the following error –
Msg 25718, Level 16, State 3, Line 13
The log file name “(null)” is invalid.  Verify that the file exists and that the SQL Server service account has access to it.

After some investigation, I noticed that the 'AlwaysOn_health' Extended Events session was in a stopped state. As a result, no Extended Events log file (.xel file) had been created. If an AG is created using the 'New AG wizard', this session is started automatically; otherwise, you need to make sure to start it manually after creating the AG. After starting the 'AlwaysOn_health' session (and waiting a few minutes to allow the .xel file to be created!), the query executed successfully.
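
For reference, the session can be started, and set to start automatically with the server, using T-SQL –

ALTER EVENT SESSION [AlwaysOn_health] ON SERVER STATE = START;
ALTER EVENT SESSION [AlwaysOn_health] ON SERVER WITH (STARTUP_STATE = ON);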

[Screenshot: eesessaion]

XEL file path – [Screenshot: xel_path2]

Though I could execute the query now, it obviously could not fetch historical failover data. To get details of failover events that occurred in the past, I used the old method of searching the SQL error log. Whenever an AlwaysOn AG failover occurs, messages similar to the following are logged in the SQL error log and the Windows event log –

The state of the local availability replica in availability group <AG Name> has changed from ‘PRIMARY_PENDING’ to ‘PRIMARY_NORMAL’. The replica state changed because of either a startup, a failover, a communication issue, or a cluster error. For more information, see the availability group dashboard, SQL Server error log, Windows Server Failover Cluster management console or Windows Server Failover Cluster log.

We can filter out failover events using the 'Filter' option of the SQL error log GUI, as shown in the next screen shot. I used a string from the above message for the 'Message contains text' filter parameter. We can narrow down the search using other filters like date range or event ID, and the 'Export' function can then be used to save the filtered data in text or CSV format.

[Screenshot: errorlogfilter2_1]

Alternatively, we can use 'xp_ReadErrorLog' to filter out failover events –
EXEC xp_ReadErrorLog 0, 1, '<AG Name>', 'The replica state changed because of either a startup, a failover, a communication issue';

The above query searches the most recent log file (first parameter '0') of the SQL error log type (second parameter '1' selects the SQL error log rather than the Agent log) for entries matching the AG name (3rd parameter) and part of the error log message (4th parameter). Note that the two search strings are combined with AND, so a row must contain both to match.
Please note that we can search only the records in the available SQL error log files. If the SQL error log is recycled, we lose historical data.

In this way, we can use the SQL error log to find AG failover events when the 'AlwaysOn_health' session is stopped.

Thanks!


…Cannot insert duplicate key row in object 'sys.syscommittab' with unique index 'si_xdes_id'…

Hello!
A database backup plan on one server had been failing for the last few days. Of the many user databases hosted on that instance, the backup was failing for only one database. The following message was logged in the job history.

Executing the query "BACKUP DATABASE <DATABASE NAME> TO  DISK = N'G:\MSSQL…" failed with the following error: Cannot insert duplicate key row in object 'sys.syscommittab' with unique index 'si_xdes_id'. The duplicate key value is (34204549973).  Failed to flush the commit table to disk in dbid 7 due to error 2601. Check the errorlog for more information.  One or more recovery units belonging to database '<DATABASE NAME>' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.  …  The package execution fa…  The step failed.

Looking at this error message, it first appeared to be an issue of inconsistent data or possibly insufficient disk space. But searching for the error revealed that it was related to 'change tracking' and is a known issue with SQL Server; please refer to this KB link for more details. The SQL instance in question was at a higher version level (11.0.6020.0) than discussed in the KB article, so I was a bit unsure about going in the 'change tracking' direction, but decided to give it a try. After disabling 'change tracking' on that database, the backup executed successfully. This confirmed the issue was indeed with 'change tracking'. Following are the steps I followed.

You can check whether change tracking is enabled from the database options screen.

[Screenshot: capture2]

Alternatively, you can use the following query to find databases with change tracking enabled.
SELECT DB_NAME(database_id) AS DBName, * FROM sys.change_tracking_databases;

You cannot disable change tracking for the database until you disable it at the table level. You'll get the following error if you try to disable change tracking directly at the database level.
Msg 22115, Level 16, State 1, Line 2
Change tracking is enabled for one or more tables in database <DatabaseName>. Disable change tracking on each table before disabling it for the database. Use the sys.change_tracking_tables catalog view to obtain a list of tables for which change tracking is enabled.
Msg 5069, Level 16, State 1, Line 2
ALTER DATABASE statement failed.

So you first need to find the tables with change tracking enabled, using the following query.
SELECT OBJECT_NAME(object_id), * FROM sys.change_tracking_tables;

Use the following ALTER TABLE statement to disable change tracking for a particular table.
ALTER TABLE <Table Name>
DISABLE CHANGE_TRACKING;
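
If many tables have change tracking enabled, a small script can generate all the DISABLE statements in one go (a sketch) –

-- Generate a DISABLE CHANGE_TRACKING statement for every tracked table.
SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.'
       + QUOTENAME(OBJECT_NAME(object_id)) + ' DISABLE CHANGE_TRACKING;'
FROM sys.change_tracking_tables;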

After disabling change tracking for all the tables, it can be disabled for the database using the following query.
ALTER DATABASE <DatabaseName>
SET CHANGE_TRACKING = OFF;

Following are some links which I found useful while working on this issue.
* Change tracking T-SQL – MSDN link
* A good introduction to change tracking – MSSQLTIPS link
* Change tracking in depth – Kendra Little's article

Thanks!