
Sunday, March 11, 2012

Differential Backups in Maintenance Plans

Using SQL Server 2000, I don't see any way of setting up a maint plan to
include nightly differential backups. I would like to use a maint plan as it
allows you to select "All user DBs". Since we add databases quite a lot, I
like the dynamic aspect of maint plans.
Does anyone know of a workaround or even a script that would allow me to
dynamically back up each user database? I've looked at sp_msforeachdb, but
can't seem to get it to work as I'm not too good at scripting.
Thanks
Ron|||Ron,
In SQL Server 2000, as you have discovered, there is no way to include
differential backups in a maintenance plan. TechNet, when discussing SQL Server 2000,
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlbackuprest.mspx#ELYAG
describes how to set up differential backups one database at a time,
creating a schedule for each differential backup. By the way, please note
that _master_ cannot be backed up differentially.
You, of course, do not want to do that, but if you do it for one database it
will give you the working syntax. E.g.
BACKUP DATABASE MyDatabase TO
DISK = N'\\BackupServer\MyDatabase_diff_200710230021.BAK'
WITH NOINIT , NOUNLOAD , DIFFERENTIAL ,
NAME = N'MyDatabase backup', NOSKIP , STATS = 10, NOFORMAT
Now, from that, perhaps you can create a script. Here is one that
only PRINTs the command, but you can change it to EXECUTE instead.
EXEC sp_MSforeachdb @command1='
DECLARE @BuildStr NVARCHAR(500)
SET @BuildStr = CONVERT(NVARCHAR(20), GETDATE(), 120)
SET @BuildStr = REPLACE(REPLACE(REPLACE(@BuildStr, '' '', ''''), '':'', ''''), ''-'', '''')
SET @BuildStr = ''BACKUP DATABASE $ TO DISK = N''''\\BackupServer\$_diff_'' + @BuildStr
SET @BuildStr = @BuildStr + ''.BAK'''' WITH NOINIT , NOUNLOAD , DIFFERENTIAL ,''
SET @BuildStr = @BuildStr + '' NAME = N''''$ backup'''', NOSKIP , STATS = 10, NOFORMAT''
IF ''$'' <> ''master''
    PRINT @BuildStr'
, @replacechar = '$'
Of course, the backups need to be in the same location as your other backups
if your maintenance plan's deletion of old files is to include these as well.
Also, remember that sp_MSforeachdb is unsupported.
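If you would rather avoid the unsupported procedure, a cursor over
sysdatabases does the same job. A rough sketch (using the same made-up
\\BackupServer share as the example above):

DECLARE @name SYSNAME, @stamp NVARCHAR(20), @sql NVARCHAR(1000)
SET @stamp = REPLACE(REPLACE(REPLACE(CONVERT(NVARCHAR(20), GETDATE(), 120), ' ', ''), ':', ''), '-', '')

DECLARE db_cur CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')   -- user databases only
OPEN db_cur
FETCH NEXT FROM db_cur INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'BACKUP DATABASE [' + @name + ']'
             + ' TO DISK = N''\\BackupServer\' + @name + '_diff_' + @stamp + '.BAK'''
             + ' WITH NOINIT , NOUNLOAD , DIFFERENTIAL ,'
             + ' NAME = N''' + @name + ' backup'', NOSKIP , STATS = 10, NOFORMAT'
    PRINT @sql   -- change to EXEC (@sql) once the output looks right
    FETCH NEXT FROM db_cur INTO @name
END
CLOSE db_cur
DEALLOCATE db_cur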
RLF
"Ron" <Ron@.discussions.microsoft.com> wrote in message
news:44788552-025A-4A26-8387-F31F069AE7D2@.microsoft.com...
> Using SQL Server2000, I don't see any way of setting up a maint plan to
> include nightly differential backups. I would like to use a maint plan as
> it
> allows you to select "All user DBs". Since we add databases quite a lot,
> I
> like the dynamic aspect of maint plans.
> Does anyone know of workaround or even a script that would allow me to
> dynamically backup each user database? I've looked at sp_msforeachdb, but
> can't seem to get it to work as I'm not too good at scripting.
> Thanks
> Ron|||Thank you Russell!
"Russell Fields" wrote:
> Ron,
> In SQL Server 2000, as you have discovered, there is no way to do
> differential backups. Technet when discussing SQL Server 2000
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlbackuprest.mspx#ELYAG
> describes how to set up differential backups one database at a time,
> creating a schedule for each differential backup. By the way, please note
> that _master_ cannot be backed up differentially.
> You, of course, do not want to do that, but if you do it for one database it
> will give you the working syntax. E.g.
> BACKUP DATABASE MyDatabase TO
> DISK = N'\\BackupServer\MyDatabase_diff_200710230021.BAK'
> WITH NOINIT , NOUNLOAD , DIFFERENTIAL ,
> NAME = N'MyDatabase backup', NOSKIP , STATS = 10, NOFORMAT
> Now, from that perhaps you can create a script from that. Here is one that
> only PRINTs the command, but you can change this to EXECUTE it instead.
> sp_msforeachdb @.command1='
> DECLARE @.BuildStr NVARCHAR(500)
> SET @.BuildStr = CONVERT(NVARCHAR(20),GETDATE(),120)
> SET @.BuildStr = REPLACE(REPLACE(REPLACE(@.BuildStr,''
> '',''''),'':'',''''),''-'','''')
> SET @.BuildStr = ''
> Backup Database $ TO DISK = N''''\\BackupServer\$_diff_''+@.BuildStr
> SET @.BuildStr = @.Buildstr + '''
> WITH NOINIT , NOUNLOAD , DIFFERENTIAL ,''
> SET @.BuildStr = @.Buildstr + ''
> NAME = N''''$ backup'''', NOSKIP , STATS = 10, NOFORMAT ''
> IF ''$''<>''master''
> PRINT @.BuildStr'
> ,@.replacechar='$'
> Of course, the backups need to be in the same location as other backups for
> your maintenance plan deletion of old files to include these as well. Also,
> remember that sp_msforeachdb is unsupported.
> RLF
> "Ron" <Ron@.discussions.microsoft.com> wrote in message
> news:44788552-025A-4A26-8387-F31F069AE7D2@.microsoft.com...
> > Using SQL Server2000, I don't see any way of setting up a maint plan to
> > include nightly differential backups. I would like to use a maint plan as
> > it
> > allows you to select "All user DBs". Since we add databases quite a lot,
> > I
> > like the dynamic aspect of maint plans.
> >
> > Does anyone know of workaround or even a script that would allow me to
> > dynamically backup each user database? I've looked at sp_msforeachdb, but
> > can't seem to get it to work as I'm not too good at scripting.
> >
> > Thanks
> >
> > Ron
>
>

Differential Backup File Size

With my maintenance plans for backup jobs, I can control how long the backup
files will remain, hours, days, weeks... before deletion.
With my differential backup jobs, I don't seem to have the ability to
control the residency of a differential backup file on disk, at least I am
not aware of that capability yet, and I am new to SQL Server, and curious as
all get out about its nice features. The differential backup file just
keeps growing and growing and growing. Right now, I manually take the large
file, relocate it to another folder, then when the next differential backup
file is written to the disk location, I then delete the removed large
differential file and continue to let the new backup set grow, repeating the
process as necessary.
Is there a better way to control this process, or at least automate it
somehow with T-SQL statements, so the large diff file is transferred to
a new folder until a new file is successfully generated, and then deleted? I do
understand the need to maintain the differential file in correspondence with
a current full backup file and the corresponding transaction logs for
recovery purposes.
Thanks for your assistance.|||Hi,
This is what you can do:
You can create two backup jobs for the differential Backups.
Let one run on Mon - Wed - Fri and the other run on Tue - Thu - Sat
On Sunday you can schedule a complete backup.
In each of these jobs, add one more step that gets executed only if the
preceding backup step was successful.
Let this new step be of type "Operating System Command".
In the process, add the OS command to delete the file from the previous
day's differential backup, or even copy the file to a different location.
You can have two separate folders, one each for the Backup Job.
Folder A for Job A and Folder B for job B
When you execute Job A on Monday, let the next step delete/move the backup
file located in the Folder B, after the diff backup is over.
Similarly when you execute Job B on Tuesday, let the next step delete/move
the backup file located in the Folder A, after the diff backup is over.
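For example, if the two folders were D:\DiffBackups\FolderA and
D:\DiffBackups\FolderB (made-up paths), the extra step in Job A could be as
simple as:

-- step 2 of Job A: remove the previous differential from Job B's folder
EXEC master.dbo.xp_cmdshell 'del /q "D:\DiffBackups\FolderB\*.BAK"'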
HTH
Ashish
This posting is provided "AS IS" with no warranties, and confers no rights.
|||That gives a direction to pursue, thank you.
"Ashish Ruparel [MSFT]" <v-ashrup@.online.microsoft.com> wrote in message
news:i81O3gMMEHA.3364@.cpmsftngxa10.phx.gbl...
> Hi,
> This is what
> You can create two backup jobs for the differential Backups.
> Let one run on Mon - Wed - Fri and the other run on Tue - Thu - Sat
> On Sunday you can schedule a complete backup.
> In Each of these Jobs add one more step that would get executed only if
the
> preceeding backup step was successful.
> Let this new step be of "Operating System Command"
> In the Process add the OS command to delete the file from the previous
> day's differential backup or even copy the file to a different location.
> You can have two separate folders, one each for the Backup Job.
> Folder A for Job A and Folder B for job B
> When you execute Job A on Monday, let the next step delete/move the backup
> file located in the Folder B, after the diff backup is over.
> Similarly when you execute Job B on Tuesday, let the next step delete/move
> the backup file located in the Folder A, after the diff backup is over.
> HTH
> Ashish
> This posting is provided "AS IS" with no warranties, and confers no
rights.
>

Saturday, February 25, 2012

Different query plans for view and view definition statement

I compared the view query plan with the query plan if I run the same statement
from the view definition and get different results. The view plan is more
expensive and runs longer. The view contains 4 inner joins, statistics
updated for all tables. Any ideas?|||which version?|||SQL Server 2000, Enterprise Edition with SP4|||ysfinks (ysfinks@gmail.com) writes:
> I compared view query plan with query plan if I run the same statement
> from view definition and get different results. View plan is more
> expensive and runs longer. View contains 4 inner joins, statistics
> updated for all tables. Any ideas?

Since you didn't share anything close to a repro, I have little idea
of what you are doing. Since a view essentially is a macro, it should
not matter that much. Then again, I've been wrong before. Anyway, it
would help if you posted the view, and the two SELECTs you run.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||I narrowed down to one join. Same difference in query plans. Query cost
for 1 is 5.48%, for 2 is 94.52%
My view is:
create view dbo.sf_test as
SELECT
dbo.ManagedNodes.NodeID, dbo.ManagedNodes.SubscriptionID,
CompanyAccounts.root_account_id AS ECCRootID
FROM dbo.ManagedNodes WITH (NOLOCK)
INNER JOIN dbo.accounts CompanyAccounts WITH (NOLOCK)
ON dbo.ManagedNodes.ECCRootID = CompanyAccounts.account_id

My queries are:
1.
SELECT dbo.ManagedNodes.NodeID, dbo.ManagedNodes.SubscriptionID,
CompanyAccounts.root_account_id AS ECCRootID
FROM dbo.ManagedNodes WITH (NOLOCK)
INNER JOIN dbo.accounts CompanyAccounts WITH (NOLOCK)
ON dbo.ManagedNodes.ECCRootID = CompanyAccounts.account_id
where ECCRootID=15427
2.
select NodeID, SubscriptionID, ECCRootID
from dbo.sf_test where eccrootid=15427|||ysfinks (ysfinks@gmail.com) writes:
> I narrowed down to one join. Same difference in query plans. Query cost
> for 1 is 5.48%, for 2 is 94.52%
> My view is:
> create view dbo.sf_test as
> SELECT
> dbo.ManagedNodes.NodeID, dbo.ManagedNodes.SubscriptionID,
> CompanyAccounts.root_account_id AS ECCRootID
> FROM dbo.ManagedNodes WITH (NOLOCK)
> INNER JOIN dbo.accounts CompanyAccounts WITH (NOLOCK)
> ON dbo.ManagedNodes.ECCRootID = CompanyAccounts.account_id
> My queries are:
> 1.
> SELECT dbo.ManagedNodes.NodeID, dbo.ManagedNodes.SubscriptionID,
> CompanyAccounts.root_account_id AS ECCRootID
> FROM dbo.ManagedNodes WITH (NOLOCK)
> INNER JOIN dbo.accounts CompanyAccounts WITH (NOLOCK)
> ON dbo.ManagedNodes.ECCRootID = CompanyAccounts.account_id
> where ECCRootID=15427
> 2.
> select NodeID, SubscriptionID, ECCRootID
> from dbo.sf_test where eccrootid=15427

I will have to admit that I don't have any good answers at this
point. But I still like to ask some questions, just to check:

Exactly how do you create the view? From Query Analyzer or Enterprise
Manager? If the latter, what happens, if you run a script in QA
where you first create the view, and then run the queries?

What happens if you take out the NOLOCK hints?

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||So if I am understanding: if you do a select against a view, it takes a
very long time.

but if you copy that exact same code into query analyzer or a stored
procedure, it goes MUCH faster.

And if I am understanding the issue correctly, there will be an index
on eccrootid.

And, if I am understanding, the view won't use the index on ECCrootid,
but everything else will.
Do I have the issue right? If so, yup, it does that in SS2000. You
can try compiler hints in the view to FORCE it to use the index, but
that only works sometimes.

Best workaround is to move all your views to stored procedures, and
pass the eccrootid parameter to the sproc.
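A rough sketch of that workaround, using the poster's query 1 with the filter
as a parameter (the procedure name is made up, and an integer key is assumed):

CREATE PROCEDURE dbo.GetNodesByRoot
    @eccrootid INT   -- assuming an integer key
AS
SELECT dbo.ManagedNodes.NodeID, dbo.ManagedNodes.SubscriptionID,
    CompanyAccounts.root_account_id AS ECCRootID
FROM dbo.ManagedNodes WITH (NOLOCK)
INNER JOIN dbo.accounts CompanyAccounts WITH (NOLOCK)
    ON dbo.ManagedNodes.ECCRootID = CompanyAccounts.account_id
WHERE dbo.ManagedNodes.ECCRootID = @eccrootid
GO

EXEC dbo.GetNodesByRoot @eccrootid = 15427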

I reported this 5 years ago, and even discussed it with Erland at that
time.

Views suck.

Regards,
Doug|||A VIEW is handled two ways in SQL. The text of the VIEW is "pasted"
into the query that uses it and then the parser and optimizer handle it
as if the query had been written with a derived table. The parser can
do a lot of stuff at this point, so the original view text is "spread out
all over the place".

The second way is to materialize the VIEW as a temporary table. The good
news is that this materialized table can be shared by multiple users,
so the overall processing time goes down, even if each user's plan is
not optimal for their query. This is a feature of larger SQL products
like Ingres, DB2 or Oracle.

Trust in the optimizer, Luke.|||If you are going to have a materialized view, why not just bite the
bullet and have a denormalized table hanging around that gets updated
all the time.

the optimizer is fine 90 percent of the time.|||The view was created from Query Analyzer. If I remove nolock - same result.
Another fact - if I change the condition value in the where clause, for some
values it gives the view the good query plan, using the index on
eccrootid.
For the query simulating the view - always a good plan.|||ysfinks (ysfinks@gmail.com) writes:
> View was created from Query Analyzer. If I remove nolock - same result.
> Another fact - if I change condition value in where clause, for some
> values it gives for the view the good query plan using index for
> eccrootid.
> For the query simulating the view - always good plan.

I will have to admit that I am fairly stumped at this point. For this reason
I have consulted some other people offline. No promises, but keep watching
this space.

Nevertheless, could you run the two queries bracketed by

set statistics profile on
set statistics profile off

If you can put that in a file as an attachment or on a web site, to avoid
the output being mangled in news transport, that would be great, but
anything goes.
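That is, something like this, using the two queries posted above:

SET STATISTICS PROFILE ON

SELECT dbo.ManagedNodes.NodeID, dbo.ManagedNodes.SubscriptionID,
    CompanyAccounts.root_account_id AS ECCRootID
FROM dbo.ManagedNodes WITH (NOLOCK)
INNER JOIN dbo.accounts CompanyAccounts WITH (NOLOCK)
    ON dbo.ManagedNodes.ECCRootID = CompanyAccounts.account_id
WHERE ECCRootID = 15427

SELECT NodeID, SubscriptionID, ECCRootID
FROM dbo.sf_test WHERE eccrootid = 15427

SET STATISTICS PROFILE OFF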

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Different query plans

I have 2 SQL databases which are the same and are giving me different
query plans.

select s.* from hlresults h
inner join specimens s on s.specimen_tk = h.specimen_tk
where s.site_tk = 9 and s.location in ('ABC','WIAD')
and s.date_collected between '2/1/2003' and '2/3/2006'
order by s.location, s.date_collected

Both boxes have the same configuration, the only difference is that one
of them is a cluster.

The Acluster box is taking twice as long to run the query.

I have run statistics on both, and the cluster is still creating a
bitmap and running some parallelism which the other box is not.
Also, in the first step, the A1 box estimates the rows returned to be
around 80K and the actual rows returned is about 40K - subtree cost =
248. The Acluster box estimates 400K - subtree cost=533!
After running statistics, how can it be so off?

I've also reindexed to no avail . . .

any insight would be very much appreciated. We just moved to this new
system and I hate that the db is now slower -

A1 (sp_configure output: name, minimum, maximum, config_value, run_value):
affinity mask -2147483648 2147483647 0 0
allow updates 0 1 0 0
awe enabled 0 1 1 1
c2 audit mode 0 1 0 0
cost threshold for parallelism 0 32767 0 0
Cross DB Ownership Chaining 0 1 0 0
cursor threshold -1 2147483647 -1 -1
default full-text language 0 2147483647 1033 1033
default language 0 9999 0 0
fill factor (%) 0 100 90 90
index create memory (KB) 704 2147483647 0 0
lightweight pooling 0 1 0 0
locks 5000 2147483647 0 0
max degree of parallelism 0 32 4 4
max server memory (MB) 4 2147483647 14336 14336
max text repl size (B) 0 2147483647 65536 65536
max worker threads 32 32767 255 255
media retention 0 365 0 0
min memory per query (KB) 512 2147483647 1024 1024
min server memory (MB) 0 2147483647 4096 4096
nested triggers 0 1 0 0
network packet size (B) 512 32767 4096 4096
open objects 0 2147483647 0 0
priority boost 0 1 0 0
query governor cost limit 0 2147483647 0 0
query wait (s) -1 2147483647 -1 -1
recovery interval (min) 0 32767 0 0
remote access 0 1 1 1
remote login timeout (s) 0 2147483647 0 0
remote proc trans 0 1 0 0
remote query timeout (s) 0 2147483647 0 0
scan for startup procs 0 1 1 1
set working set size 0 1 0 0
show advanced options 0 1 1 1
two digit year cutoff 1753 9999 2049 2049
user connections 0 32767 0 0
user options 0 32767 0 0

Acluster (sp_configure output: name, minimum, maximum, config_value, run_value):
affinity mask -2147483648 2147483647 0 0
allow updates 0 1 0 0
awe enabled 0 1 1 1
c2 audit mode 0 1 0 0
cost threshold for parallelism 0 32767 0 0
Cross DB Ownership Chaining 0 1 0 0
cursor threshold -1 2147483647 -1 -1
default full-text language 0 2147483647 1033 1033
default language 0 9999 0 0
fill factor (%) 0 100 90 90
index create memory (KB) 704 2147483647 0 0
lightweight pooling 0 1 0 0
locks 5000 2147483647 0 0
max degree of parallelism 0 32 4 4
max server memory (MB) 4 2147483647 14336 14336
max text repl size (B) 0 2147483647 65536 65536
max worker threads 32 32767 255 255
media retention 0 365 0 0
min memory per query (KB) 512 2147483647 1024 1024
min server memory (MB) 0 2147483647 4095 4095
nested triggers 0 1 0 0
network packet size (B) 512 32767 4096 4096
open objects 0 2147483647 0 0
priority boost 0 1 0 0
query governor cost limit 0 2147483647 0 0
query wait (s) -1 2147483647 -1 -1
recovery interval (min) 0 32767 0 0
remote access 0 1 1 1
remote login timeout (s) 0 2147483647 0 0
remote proc trans 0 1 0 0
remote query timeout (s) 0 2147483647 0 0
scan for startup procs 0 1 1 1
set working set size 0 1 0 0
show advanced options 0 1 1 1
two digit year cutoff 1753 9999 2049 2049
user connections 0 32767 0 0
user options 0 32767 0 0|||traceable1 (tracykc@gmail.com) writes:
> I have 2 SQL databases which are the same and are giving me different
> query plans.
>...
> Both boxes have the same configuration, the only difference is that one
> of them is a cluster.

So they have the same number of CPUs?

> I have run statistics on both, and the cluster is still creating a
> bitmap and running some parallelism which the other box is not.
> Also, the the first step, the A1 box estimates the rows returned to be
> around 80K and the actual rows returned is about 40K - subtree cost =
> 248. The Acluster box estimates 400K - subtree cost=533!
> After running statistics, how can it be so off?

You could try running UPDATE STATISTICS WITH FULLSCAN on the involved
tables, to be really sure that you have factored that part out.

Also, try adding OPTION (MAXDOP 1) on the cluster. Parallelism is
sometimes good, but sometimes it's bad...
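In T-SQL, something along these lines (assuming the tables are owned by dbo):

UPDATE STATISTICS dbo.hlresults WITH FULLSCAN
UPDATE STATISTICS dbo.specimens WITH FULLSCAN

SELECT s.* FROM hlresults h
INNER JOIN specimens s ON s.specimen_tk = h.specimen_tk
WHERE s.site_tk = 9 AND s.location IN ('ABC', 'WIAD')
  AND s.date_collected BETWEEN '2/1/2003' AND '2/3/2006'
ORDER BY s.location, s.date_collected
OPTION (MAXDOP 1)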

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Friday, February 24, 2012

Different execution plans from c# code and MSSQLServerManagementStudio

Hi all,
I've been looking into a strange problem in the last weeks.
We use SQL Server 2005.
We have some code written in C# which goes to the database, executes
some stored procedures and retrieves the results.
Now, the procedures are a little bit strange, as in a procedure we
dynamically create the SELECT statement with different WHERE clauses
based on the parameters we receive and then use exec sp_executesql to
actually run the statement. (I assume that this prevents SQL Server from
compiling the final statements and reusing the compiled execution plan,
right?)
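(For illustration, the pattern described might look roughly like this - the
table, column, and parameter names are invented:)

CREATE PROCEDURE dbo.SearchOrders
    @CustomerId INT = NULL
AS
DECLARE @sql NVARCHAR(MAX)
SET @sql = N'SELECT * FROM dbo.Orders WHERE 1 = 1'

-- only the filters whose parameters were supplied end up in the WHERE clause
IF @CustomerId IS NOT NULL
    SET @sql = @sql + N' AND CustomerId = @CustomerId'

-- sp_executesql runs the finished statement; plans can still be cached
-- per distinct statement text
EXEC sp_executesql @sql, N'@CustomerId INT', @CustomerId = @CustomerId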
All is well for most of the time. Sometimes though some of the
procedures get very slow.
For example a procedure which would usually run in 100ms now takes up
to 5 seconds. Or one which normally ran in 1 sec, runs more than 30
seconds.
I started examining with the SQL Server Profiler, and got strange
results:
- I launch the C# code, it calls the SP and the trace shows a Duration
of 5000 ms.
- I copy the statement executed in SQL Management Studio, run from
there and the Duration is 90ms
(Note that when I compared the SQLManagementStudio and C# runs of the
procedures, I used the same parameters).
I configured the profiler to show the execution plans, and the two
plans are different. It is clear that when I run the query from the SQL
management studio, it uses the right indexes, but when running from the
C# code it won't use the good indexes.
Interestingly I can run them alternatively several times (once from SQL
Management Studio, once from C#, again from SQLMS, again from C#,
a.s.o), and the results are unchanged: good performance from SQLMS, bad
performance from C#.
As our procedures generate dynamic sql statements, I assume that no
execution plan is reused here. The two execution plans generated are
just plain different.
If I restart the SQL Server, everything goes back to normal: the
procedures run fast from C#. After a day or two the performance problem
pops up in the same place, or in some other (but very similar)
procedure.
If I modify the procedure to include a WITH(Index(...)) hint,
everything is fast :-)
I also qualify the owner of the stored procedure when I call it.
(dbo.sp_ProcName, xxx.sp_AnotherProcName a.s.o).
In order to minimize the effect of the .NET Framework classes, I
reproduced the calls using SqlConnection/SqlCommand,
OleDBConnection/OleDBCommand too. One of my colleagues also reproduced
it with code written in VB6 (non .net) (running from a different
machine).
Any idea why the C# call will generate a bad execution plan? And how I
could make sure it generates the good one (except using the
WITH(INDEX()) hints)?
Thank you,
Trucza Csaba|||Hi
> As our procedures generate dynamic sql statements, I assume that no
> execution plan is reused here. The two execution plans generated are
> just plain different.
No, SQL Server could reuse an execution plan; otherwise we would have seen a
RECOMPILE statement in Profiler.
Well, there are many scenarios that can hurt performance, known as
"parameter sniffing".
Search the internet for this title to get more details; also, please read
these articles, which may shed some light on it.
http://www.sql-server-performance.com/q&a132.asp
http://www.sql-server-performance.com/nb_query_process_tuning.asp
|||As Uri notes, the execution plan may still be reused and, depending on the
WHERE clause you build, may not be optimal. Look at using the "WITH
RECOMPILE" option when creating or running your procedure and see if that
keeps the performance stable.
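For instance (the procedure and parameter here are placeholders):

-- recompile on every execution of the procedure
CREATE PROCEDURE dbo.SearchOrders2
    @Filter NVARCHAR(200)
WITH RECOMPILE
AS
    SELECT @Filter AS Echo   -- stand-in for the real dynamic-SQL body
GO

-- or recompile just one call of an existing procedure
EXEC dbo.SearchOrders2 @Filter = N'abc' WITH RECOMPILE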
"Uri Dimant" wrote:
> Hi
> > As our procedures generate dynamic sql statements, I assume that no
> > execution plan is reused here. The two exceutino plans generated are
> > just plain different.
> No, SQL Server could resuse an execution plan otherwise we would have seen
> RECOMPILE statement in Profiler
> Well , there are many scenarios that make hurt perfomance as we know as
> "parameter sniffing"
> Search on interenet for this tiltle to get more details as well please read
> this article may shed some lights on.
> http://www.sql-server-performance.com/q&a132.asp
> http://www.sql-server-performance.com/nb_query_process_tuning.asp
>
> <csaba.trucza@.gmail.com> wrote in message
> news:1143715907.378509.198330@.j33g2000cwa.googlegroups.com...
> > Hi all,
> >
> > I've been looking into a strange problem in the last weeks.
> >
> > We use SQL Server 2005.
> >
> > We have some code written in C# which goes to the database, executes
> > some stored procedures and retrieves the results.
> >
> > Now, the procedures are a little bit strange, as in a procedure we
> > dynamically create the SELECT statement with different WHERE clauses
> > based on the parameters we receive and then use exec sp_executesql to
> > actually run the statement. (I assume that this prevents sql server to
> > compile the final statements and reuse the compiled execution plan,
> > right?)
> >
> > All is well for most of the time. Sometimes though some of the
> > procedures get very slow.
> >
> > For example a procedure which would usually run in 100ms now takes up
> > to 5 seconds. Or one which normally ran in 1 sec, runs more than 30
> > second.
> >
> > I started examining with the SQL Server Profiler, and got strange
> > results:
> >
> > - I launch the C# code, it calls the SP and the trace shows a Duration
> > of 5000 ms.
> > - I copy the statement executed in SQL Management Studio, run from
> > there and the Duration is 90ms
> >
> > (Note that when I compared the SQLManagementStudio and C# runs of the
> > procedures, I used the same parameters).
> >
> > I configured the profiler to show the execution plans, and the two
> > plans are different. It is clear that when I run the query from the SQL
> > management studio, it uses the right indexes, but when running from the
> > C# code it won't use the good indexes.
> >
> > Interestingly I can run them alternatively several times (once from SQL
> > Management Studio, once from C#, again from SQLMS, again from C#,
> > a.s.o), and the results are unchanged: good performance from SQLMS, bad
> > performance from C#.
> >
> > As our procedures generate dynamic sql statements, I assume that no
> > execution plan is reused here. The two exceutino plans generated are
> > just plain different.
> >
> > If I restart the SQL Server, everything goes back to normal: the
> > procedures run fast from C#. After a day or two the performance problem
> > pops up in the same place, or in some other (but very similar)
> > procedure.
> >
> > If I modify the procedure to include a WITH(Index(...)) hint,
> > everything is fast :-)
> >
> > I also qualify the owner of the stored procedure when I call it.
> > (dbo.sp_ProcName, xxx.sp_AnotherProcName a.s.o).
> >
> > In order to minimize the effect of the ,net framework classes, I
> > reproduced the calls using SqlConnection/SqlCommand,
> > OleDBConnection/OleDBCommand too. One of my colleagues also reproduced
> > it with code written in VB6 (non .net) (running from a different
> > machine).
> >
> > Any idea why the C# call will generate a bad execution plan? And how I
> > could make sure it generates the good one (except using the
> > WITH(INDEX()) hints)?
> >
> > Thank you,
> > Trucza Csaba
> >
>
>|||Hi Uri,
Uri wrote:
> No, SQL Server could reuse an execution plan; otherwise we would have seen a
> RECOMPILE statement in Profiler.
That SQL Server reuses the execution plan of dynamic statements came to
me as a major surprise. Pleasant one.
I'm still confused though about why the execution plan differs when I
execute the same stored procedure with the same parameters from C# code
and from SQL Management Studio.
Cheers,
Csaba|||Hi Wayne,
> Look at using the "WITH RECOMPILE" option when creating or running your procedure
> and see if that keeps the performance stable.
As reproducing the original situation is not as simple as I'd like,
I still have to wait and see if we get into trouble with the WITH
RECOMPILE option.
In some troublesome procedures I included the WITH RECOMPILE option, in
others I included WITH(INDEX(...)) hints.
Now I'm waiting to see if any of them start behaving badly.
I observed though that WITH RECOMPILE slows down the execution a little
bit.
OTOH it will always produce optimized execution plans. Right?
Greetings,
Csaba|||> That SQL Server reuses the execution plan of dynamic statements came to
> me as a major surprise. Pleasant one.
But not always the best thing to do. In most cases, you will get an "exact text" matching of the
plan. If you search for different names, for example, you will have different plans for what in the
application is the same query. I've seen cases when the customer had 10,000 plans for the same query
in the plan cache. You can identify this in syscacheobjects by checking for "AdHoc" object types.
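Something like this will show them (SQL Server 2005 exposes the old table as a
compatibility view):

SELECT cacheobjtype, objtype, usecounts, sql
FROM sys.syscacheobjects
WHERE objtype = 'Adhoc'
ORDER BY usecounts DESC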
> I'm still confused though about why the execution plan differs when I
> execute the same stored procedure with the same parameters from C# code
> and from SQL Management Studio.
One possible reason can be different SET options.
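One quick way to compare is to run this from the application's connection and
from Management Studio, and diff the two result sets:

DBCC USEROPTIONS   -- lists the SET options in effect for the current session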
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
|||Hi Tibor,
You were right. The ARITHABORT setting was the culprit. By default Management Studio has this
setting ON. The connections coming from ADO.NET did not set it
explicitly, so they got it from the database setting (I think), which
was off.
(And having this at hand, I googled +ARITHABORT +"execution plan" and
it seems I'm not alone with this problem).
This explains why the execution plan was not reused.
What still puzzles me is why the two generated execution plans are so
much different.
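(For anyone hitting the same thing: the mismatch can be evened out per session
or at the database level; a sketch, with a placeholder database name:)

-- per session, e.g. issued by the application right after connecting
SET ARITHABORT ON

-- or as the database default, picked up by connections that do not set it themselves
ALTER DATABASE MyDb SET ARITHABORT ON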


Different Execution Plans -> Same Query, Same Database, Different SQL Server Install

Background: Same database, restored from development to production.

Indexes up to date. Service Pack 3 on Production. Originally RTM on Development, upgraded to SP4; the execution plan remained the same.

The query itself is not my concern, but rather that there is such a wide difference in performance between the 2 boxes. Is there a setting on Production that might be slowing it down? My system seems to have selected a worktable; is there a way I can turn that off to mimic the server?

I want to optimize the queries to perform best on the Production server, which is hard to do if my server's choosing different execution plans.

Is there any way to determine why the Production machine would choose one plan vs. the development machine?
Development:
|--Nested Loops(Inner Join, OUTER REFERENCES:([CLIENT].[CONSUMER_UUID]))
|--Index Spool(SEEK:([CLIENT].[FIRST_NAME]=Coffee.[FIRST_NAME] AND [CLIENT].[LAST_NAME]=Coffee.[LAST_NAME]))
| |--Clustered Index Scan(OBJECT:([SAMS2K_ND_STATE].[dbo].[CLIENT].[PK_CLIENT]))
|--Clustered Index Seek(OBJECT:([SAMS2K_ND_STATE].[dbo].[CONSUMER].[PK_CONSUMER]), SEEK:([CONSUMER].[CONSUMER_UUID]=[CLIENT].[CONSUMER_UUID]), WHERE:([CONSUMER].[DOB]=[C2].[DOB] AND [CONSUMER].[RES_TOWN_NAME]=[C2].[RES_TOWN_NAME]) ORDERED FORWARD)

Production:
|--Nested Loops(Inner Join, OUTER REFERENCES:([CLIENT].[CONSUMER_UUID]))
|--Clustered Index Scan(OBJECT:([SAMS2K_ND_STATE].[dbo].[CLIENT].[PK_CLIENT]), WHERE:([CLIENT].[FIRST_NAME]=Coffee.[FIRST_NAME] AND [CLIENT].[LAST_NAME]=Coffee.[LAST_NAME]))
|--Clustered Index Seek(OBJECT:([SAMS2K_ND_STATE].[dbo].[CONSUMER].[PK_CONSUMER]), SEEK:([CONSUMER].[CONSUMER_UUID]=[CLIENT].[CONSUMER_UUID]), WHERE:([CONSUMER].[DOB]=[C2].[DOB] AND [CONSUMER].[RES_TOWN_NAME]=[C2].[RES_TOWN_NAME]) ORDERED FORWARD)

The following is the output of the statistics on my Local Development
Server:
Table 'CONSUMER'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0.
Table 'CLIENT'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0.
SQL Server Execution Times:
CPU time = 3187 ms, elapsed time = 1877 ms.
This is the output of the statistics on the Production environment:
Table 'CONSUMER'. Scan count 88857, logical reads 274907, physical reads 0, read-ahead reads 0.
Table 'CLIENT'. Scan count 41708, logical reads 48294390, physical reads 0, read-ahead reads 0.
SQL Server Execution Times:
CPU time = 690218 ms, elapsed time = 225583 ms.

|||
Have you checked that the statistics are equal for the indexes on both servers?

Look for DBCC SHOW_STATISTICS in the BOL.
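For example, for the indexes shown in the plans above, run this on both
servers and compare the output:

DBCC SHOW_STATISTICS ('CLIENT', PK_CLIENT)
DBCC SHOW_STATISTICS ('CONSUMER', PK_CONSUMER)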
|||

Try rebuilding the indexes on the production database:

DBCC DBREINDEX
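For example (an empty index name rebuilds all indexes on the table; fill
factor 0 keeps the existing setting):

DBCC DBREINDEX ('CLIENT', '', 0)
DBCC DBREINDEX ('CONSUMER', '', 0)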

|||

The databases were identical, both restored from the same backup resources on the same day.

Thank you for your responses, and I will attempt both, but I don't believe that the statistics or indexes would be different if drawn from the same resource?

|||This is not due to statistics or indexes. It should be identical when you restore from a backup of a database. There are other factors that can affect the query plan - number of processors, memory, maxdop settings etc. Additionally you should try to keep the service pack level the same between the machines. The main reason is that you could get different plans due to changes in service pack and some could be bugs also. So it is best to compare servers that are on the same service pack too. If you are seeing worse performance than SP3 for the same query/machine configuration then it is most probably a bug. I would encourage you to create a bug using the MSDN Product Feedback Center with the repro steps. Also, please use SET STATISTICS PROFILE to check the actual query plan/execution times.|||

It should indeed be identical when you restore but a colleague here has experienced some strange behavior after a restore several times. He claims that after an update of the statistics everything functioned as expected. Now I can't say I fully support this statement as I have no personal experience with this.

We tend to keep our service packs identical to production, I think this should be a rule for every development environment.

You should update your weblog Uma, you still have to answer my NOLOCK question :-)

|||

Thanks for your reply,

The production is running SP3a. I tested the query on the development environment at both RTM and SP4 (not realizing the production was still only at SP3) and I did get the same plans between RTM and SP4.

The production machine is Windows 2003 Server, dual processor, Intel Xeon 3.6 GHz, with 3.5 GB memory; it is part of a cluster.
The development machine is Windows XP, dual processor, Intel Pentium 4 2.6 GHz, with 1 GB memory.

I will compare the profiles in the am.

Different execution plans - same data, same server

Hi there - hoping someone can help me here!

I have a database that has been underperforming on a number of queries
recently - in a test environment they take only a few seconds, but on
the live data they take up to a minute or so to run. This is using the
same data.

Every evening a copy of the live data is copied to a backup 'snapshot'
database on the same server and also, on this copy the queries only
take a second or so to run. (This is testing through the Query
Analyser)

I've studied the execution plans for the same query on the snapshot db
and the live db and they seem to be significantly different - why is
this? it's looking at the same data and exactly the same code!!

Anybody got any ideas?|||Was the 'snapshot' created with BACKUP/RESTORE or some other ETL technique?
If not BACKUP/RESTORE, it may be that statistics are different, resulting in
different plans. You might try updating stats on your live database

--
Hope this helps.

Dan Guzman
SQL Server MVP

"James Walker" <james@.jimw.co.uk> wrote in message
news:8228bccf.0412080431.793b31e3@.posting.google.c om...
> Hi there - hoping someone can help me here!
> I have a database that has been underperforming on a number of queries
> recently - in a test environment they take only a few seconds, but on
> the live data they take up to a minute or so to run. This is using the
> same data.
> Every evening a copy of the live data is copied to a backup 'snapshot'
> database on the same server and also, on this copy the queries only
> take a second or so to run. (This is testing through the Query
> Analyser)
> I've studied the execution plans for the same query on the snapshot db
> and the live db and they seem to be significantly different - why is
> this? it's looking at the same data and exactly the same code!!
> Anybody got any ideas?|||"Copy of the Live Data" - as in backup/restore? I'd update statistics
on both - then check the execution plan. I've seen the "wrong" plan
taken because statistics were up to date - opposite of what you would
expect.|||Thanks for all your help people - I did:

UPDATE STATISTICS tablename

for all the dependent tables involved, and it's now fixed and is
working nice and fast again...

One question still, however: why did I need to do this? Shouldn't
it automatically keep things like this up to date?