
 All Forums
 SQL Server 2000 Forums
 SQL Server Development (2000)
 Indexed view performance.


slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 00:04:11
I have a situation where I'm considering using an indexed view. I have to perform multiple joins to a table that contains over 50,000 rows which will never update, plus a couple of other joins. Would it make sense to use an indexed view? Part of me wants to just denormalize the data. The execution plan shows that the joins are tripling the cost of the query. I suppose the only way to know for sure is to test it, but I haven't used them before. Thanks for your time.

Kristen
Test

22859 Posts

Posted - 2004-09-23 : 04:08:23
Have you already got indexes on the columns on BOTH sides of the JOINs?

If not, that would be my first test. I don't see why a couple of joins would make a "reasonable" query "unreasonable", performance-wise.

Kristen
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 07:48:23
Yeah, the indexes are there. Plus the foreign key table only has about 10 records at the moment, but it will grow. The query has 4 joins, 2 of which are to the same table, which has over 50k records... which really isn't that much, but still. That table never changes. 2 of the tables in the join will have frequent inserts and updates.
Go to Top of Page

Seventhnight
Master Smack Fu Yak Hacker

2878 Posts

Posted - 2004-09-23 : 08:10:32
Frequent inserts and deletes do not play well with indexed views... or so I've read...

[from BOL]
quote:
Indexed views can be more complex to maintain than indexes on base tables. You should create indexes only on views where the improved speed in retrieving results outweighs the increased overhead of making modifications. This usually occurs for views mapped over relatively static data, that process many rows, and are referenced by many queries.

[/from BOL]
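For reference, a minimal sketch (not from the thread) of what an indexed view over this kind of join looks like in SQL Server 2000. Table and column names are assumed from the query slacker posts later in the thread, and the required SET options (ANSI_NULLS ON, QUOTED_IDENTIFIER ON, etc.) must be in effect when both objects are created:

```sql
-- Hypothetical sketch: an indexed view requires WITH SCHEMABINDING,
-- two-part object names, and a unique clustered index to materialize it.
CREATE VIEW dbo.vw_SeminarClasses
WITH SCHEMABINDING
AS
SELECT  s.SeminarID, s.SeminarName, s.Cost, s.Instructor, c.ClassID
FROM    dbo.Seminars s
        INNER JOIN dbo.Classes c ON c.ClassID = s.NextClass
GO

-- Creating the unique clustered index is what persists the view's rows.
CREATE UNIQUE CLUSTERED INDEX IX_vw_SeminarClasses
ON dbo.vw_SeminarClasses (SeminarID, ClassID)
GO
```

Note that on SQL Server 2000 only Enterprise Edition's optimizer considers an indexed view automatically; other editions have to reference it with the NOEXPAND hint.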


Corey
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 08:39:54
OK. That makes sense. One specific table in the join will be updated all the time, and the other will update a bit too. So forget the indexed view. Something occurred to me... I could hit up this data when a user logs in and store the join data in a cookie :).

Muhahahahaha. :D

Thanks for the help guys.
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2004-09-23 : 10:26:32
I'm still not comfortable that the JOINs make it slow; there might be some other reason.

Is it worth posting some code?

Kristen
Go to Top of Page

X002548
Not Just a Number

15586 Posts

Posted - 2004-09-23 : 10:45:26
quote:
Originally posted by slacker

I have to perform multiple joins to a table that contains over 50,000 rows



50k?

Post the DDL and the SQL with some sample data... this should be sub-second...



Brett

8-)
Go to Top of Page

Seventhnight
Master Smack Fu Yak Hacker

2878 Posts

Posted - 2004-09-23 : 11:42:40
I thought that this part was the relevant part.
quote:
...2 of the tables in the join will have frequent inserts and updates.

It wouldn't be able to use indexes on these 2 tables, and it may have the tables or records locked depending on the application...

Corey
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 12:38:22
I've already rewritten the code to make it faster by eliminating the user's zip code from the join. It didn't need to be in there anyway; I could have just selected it once into some variables. But this is how it was when I originally asked the question:



create procedure GetSeminars
(
	@zip int
)
as
select
	SeminarID,
	SeminarName,
	Cost,
	Instructor,
	dbo.GeoCodeDistance( zip1.latitude, zip1.longitude, zip2.latitude, zip2.longitude ) as distance
FROM
	Seminars
	INNER JOIN Classes ON ClassID = Seminars.NextClass
	INNER JOIN Addresses ON Seminars.PrimaryAddress = AddressID
	INNER JOIN ZipCodes zip1 ON zip1.Zip = Addresses.Zip
	INNER JOIN ZipCodes zip2 ON zip2.Zip = @zip

Go to Top of Page

Seventhnight
Master Smack Fu Yak Hacker

2878 Posts

Posted - 2004-09-23 : 12:40:28
calling a function might not be that great... can you add the calculations into the view itself??

Corey
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 12:43:28
quote:
Originally posted by Seventhnight

calling a function might not be that great... can you add the calculations into the view itself??

Corey



Yeah, I can do that. Are UDFs typically slower than inlining the code? It's just a little bit of math... I got this formula from a free PHP script.

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

ALTER function GeoCodeDistance
-- GeoCodeDistance
-- Written By: slacker ( formula retrieved from web )
-- calculates the distance ( in miles ) between two latitude
-- and longitude sets.
(
@lat1 decimal(18,6),
@long1 decimal(18,6),
@lat2 decimal(18,6),
@long2 decimal(18,6)
)
returns decimal(18,2)
as
begin

RETURN

(DEGREES(ACOS(
SIN(RADIANS(@lat1)) *
SIN(RADIANS(@lat2)) +
COS(RADIANS(@lat1)) *
COS(RADIANS(@lat2)) *
COS(RADIANS(@long1 - @long2))
)
)
) * 69.09

end

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
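One caution not raised in the thread: when the two points are identical (or nearly so), floating-point rounding can push the ACOS argument fractionally above 1 and raise a domain error. A defensive variant of the same formula (a sketch, not the poster's code) clamps the argument first:

```sql
-- Hypothetical rewrite of the function above with the ACOS input clamped.
ALTER function GeoCodeDistance
(
	@lat1 decimal(18,6),
	@long1 decimal(18,6),
	@lat2 decimal(18,6),
	@long2 decimal(18,6)
)
returns decimal(18,2)
as
begin
	DECLARE @x float
	SET @x = SIN(RADIANS(@lat1)) * SIN(RADIANS(@lat2)) +
	         COS(RADIANS(@lat1)) * COS(RADIANS(@lat2)) *
	         COS(RADIANS(@long1 - @long2))
	-- Clamp into ACOS's legal domain of [-1, 1]
	IF @x > 1.0 SET @x = 1.0
	IF @x < -1.0 SET @x = -1.0
	-- 69.09 miles per degree of great-circle arc
	RETURN DEGREES(ACOS(@x)) * 69.09
end
```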



Go to Top of Page

Seventhnight
Master Smack Fu Yak Hacker

2878 Posts

Posted - 2004-09-23 : 12:46:59
In my experience, if I am trying to make a query run faster, one of the first things I do is make sure all of the UDFs that can be done inline, are.

They're probably fine for a single record, or even a few, but a large recordset will slow down significantly because of them.

Corey
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 12:49:30
quote:
Originally posted by Seventhnight

In my experience, if I am trying to make a query run faster, one of the first things I do is make sure all of the UDFs that can be done inline, are.

They're probably fine for a single record, or even a few, but a large recordset will slow down significantly because of them.

Corey



Oh sweet dude. Nice tip. I was only going to have it that one query anyways.
Go to Top of Page

Seventhnight
Master Smack Fu Yak Hacker

2878 Posts

Posted - 2004-09-23 : 12:55:18
You know what you could do...

You could build the list of zips that are within a range and make it a static indexed table, instead of doing the calculation repeatedly... then write your view using the static indexed table and it would be lightning fast.

So maybe for any two zips less than 100 or 150 miles apart you store a match, i.e.:


zip1 zip2 distance
------ ------ --------
00001 00002 25
00001 00003 89
00002 00001 25
00003 00001 89
00003 00004 62



Corey
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 12:59:04
Too many combinations. Billions.
Go to Top of Page

Seventhnight
Master Smack Fu Yak Hacker

2878 Posts

Posted - 2004-09-23 : 13:08:30
billions??? I don't think so.

You said you have 50,000 records. A cross join of that is billions, but you only want the pairs that are within, say, 100 miles.
Assume there are an average of 100 zip codes within 100 miles of any other (that should be an overestimate). That gives 5 million records. I would bet the average is much, much less than 100, probably closer to 20ish... which would give a million...


just thoughts anyway...

Corey
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-23 : 13:19:46
quote:
Originally posted by Seventhnight

billions??? I don't think so.

You said you have 50,000 records. A cross join of that is billions, but you only want the pairs that are within, say, 100 miles.
Assume there are an average of 100 zip codes within 100 miles of any other (that should be an overestimate). That gives 5 million records. I would bet the average is much, much less than 100, probably closer to 20ish... which would give a million...


just thoughts anyway...

Corey



Ahh, I see what you mean. I don't actually need anything > 100 mi. Sorry, I've been up all night drinking coffee and my brain is starting to fart.
Go to Top of Page

Seventhnight
Master Smack Fu Yak Hacker

2878 Posts

Posted - 2004-09-23 : 13:33:46
yeah... it still may be too many, but it was worth a thought...

take a nap

Corey
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2004-09-24 : 04:29:04
Can you do the calculation at the client? All that trigonometry has got to take a while, hasn't it?

Other than that, my thoughts are to prefix all tables with "dbo." (it's supposed to be faster; my tests have been inconclusive, but still), and to only do the trig calculation for ZIP2 once - plus inline the code as Seventhnight said. Note that in removing the ZIP2 JOIN (which you said you have already done) I have calculated it straight to the SIN/COS used in the geo-distance formula. It's probably obvious, but I mention it anyway!

create procedure dbo.GetSeminars
(
	@zip int
)
as

DECLARE	@SIN_zip2_latitude sometype,
	@COS_zip2_latitude sometype,
	@zip2_longitude sometype

SELECT	@SIN_zip2_latitude = SIN(RADIANS(zip2.latitude)),
	@COS_zip2_latitude = COS(RADIANS(zip2.latitude)),
	@zip2_longitude = zip2.longitude
FROM	dbo.ZipCodes zip2
WHERE	zip2.Zip = @zip

select
	SeminarID,
	SeminarName,
	Cost,
	Instructor,
	(DEGREES(ACOS(
		SIN(RADIANS(zip1.latitude)) *
		@SIN_zip2_latitude +
		COS(RADIANS(zip1.latitude)) *
		@COS_zip2_latitude *
		COS(RADIANS(zip1.longitude - @zip2_longitude))
	)
	)
	) * 69.09 AS distance
FROM	dbo.Seminars
	INNER JOIN dbo.Classes
		ON ClassID = Seminars.NextClass
	INNER JOIN dbo.Addresses
		ON Seminars.PrimaryAddress = AddressID
	INNER JOIN dbo.ZipCodes zip1
		ON zip1.Zip = Addresses.Zip


Edit: Just to confirm: you HAVE got indexes/PK on all of:
Seminars.PrimaryAddress
Seminars.NextClass
Classes.ClassID
Addresses.AddressID
Addresses.Zip
ZipCodes.Zip
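A sketch of that checklist as DDL (index names invented; several of these columns are presumably primary keys already and need nothing extra):

```sql
-- Hypothetical: supporting indexes for the JOIN columns above.
CREATE INDEX IX_Seminars_NextClass      ON dbo.Seminars (NextClass)
CREATE INDEX IX_Seminars_PrimaryAddress ON dbo.Seminars (PrimaryAddress)
CREATE INDEX IX_Addresses_Zip           ON dbo.Addresses (Zip)
-- Classes.ClassID, Addresses.AddressID and ZipCodes.Zip are likely
-- already covered by their primary keys.
```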

Kristen
Go to Top of Page

slacker
Posting Yak Master

115 Posts

Posted - 2004-09-24 : 04:41:01
quote:
Originally posted by Kristen

Can you do the calculation at the client? All that trigonometry has got to take a while, hasn't it?



I was actually surprised how well SQL Server handled the trig part, as that was my first concern. The compute scalar cost was minimal. Most of the speed issues were from the joins. So yeah, basically I eliminated one of the zip joins (I'm actually going to cache the latitude/longitude for the user in a cookie or some other form), and it's all indexed. Running smoothly now. The ultimate test will be when we have a bunch of seminars. I'm going to leave it as is for now. Thanks for the help.

And just to clarify... the query wasn't running slow, slow. I just noticed in the trace window that with the joins it tripled the cost. If I entered latitude/longitude values directly without the joins it was 3 times faster. That doesn't mean that it was crawling. I was just trying to optimize as much as possible. And I thank you again for helping me do this.
Go to Top of Page
   
