 All Forums
 SQL Server 2000 Forums
 Transact-SQL (2000)
 Select performance with multiple where conditions

Author  Topic 

mctaff
Starting Member

13 Posts

Posted - 2010-01-16 : 11:59:30
Hi,

I'm running the following query on a table with 1 million plus rows...


select top 1 @open=[open], @date=[date]
from FTS.dbo.GBPJPY1
where [date] > @firstPositionOpen
and [low] < @firstPositionStopLoss
order by [date]


After 5 mins I killed the query....

If I comment out either of the two where conditions it returns in 2 seconds!

Any ideas on how I can keep both conditions but retain the performance?

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2010-01-16 : 12:16:22
Make sure you have a covering index over columns Date and Low, with column Open included.

CREATE NONCLUSTERED INDEX IX_Peso ON dbo.GBPJPY1 (Date, Low) INCLUDE (Open)


Create this index and retry your query and report back (post) your new timing.
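
If you want hard numbers rather than a stopwatch, wrapping the query in SET STATISTICS TIME (same variables as in your post assumed to be declared) will print CPU and elapsed time to the Messages tab - just a sketch:

SET STATISTICS TIME ON;

SELECT TOP 1 @open = [open], @date = [date]
FROM FTS.dbo.GBPJPY1
WHERE [date] > @firstPositionOpen
  AND [low] < @firstPositionStopLoss
ORDER BY [date];

SET STATISTICS TIME OFF;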




N 56°04'39.26"
E 12°55'05.63"

Kristen
Test

22859 Posts

Posted - 2010-01-16 : 12:37:07
Interesting. Will that one index cover both criteria?

I reckon you may have to put [...] around those reserved words though ...

CREATE NONCLUSTERED INDEX IX_Peso ON dbo.GBPJPY1 ([Date], [Low]) INCLUDE ([Open])

mctaff
Starting Member

13 Posts

Posted - 2010-01-16 : 12:51:03
Just noticed I'm posting these in the wrong forum! Using SQL Server 08

Also, I didn't point out that the code is being executed multiple times in a loop - 1000s of times... this clearly hampers performance!

The query has been running for minutes now, even with the nonclustered index in place...


I think I may be approaching my problem in the wrong way... I need to sequentially process data from two data streams - a one min sample and a fifteen min sample... I've created a view over the fifteen min data, to reduce it to a sample size of approx 5000, from 100,000+. The one min data is 1 million plus rows.

For every entry in the fifteen min sample, I then want to find the subsequent (by date) entry in the one min data that fits the criteria above - ie [date] > given date and [low] < given value. Both of these values are dynamic and change for every entry in the fifteen min sample, which is why I haven't created a second view for this. Each run through the fifteen min data moves chronologically through time, so towards the end of the run I'd only need to be looking at the last portion of the one min data.


Any ideas on a better approach?
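
One thing I was wondering about - could a CROSS APPLY do this in one set-based pass instead of the loop? Just a rough sketch; I've assumed the fifteen min view is called dbo.GBPJPY15 and carries a StopLoss value per row, the real names differ:

SELECT f.[date] AS fifteenMinDate,
       m.[date] AS matchDate,
       m.[open]
FROM dbo.GBPJPY15 AS f
CROSS APPLY (
    SELECT TOP 1 o.[date], o.[open]
    FROM FTS.dbo.GBPJPY1 AS o
    WHERE o.[date] > f.[date]
      AND o.[low] < f.StopLoss
    ORDER BY o.[date]
) AS m;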


SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2010-01-16 : 15:50:00
You could post some sample data and DDL to illustrate your conundrum.
And also your expected output.



N 56°04'39.26"
E 12°55'05.63"

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2010-01-16 : 16:19:49
Is this related to the moving average problem?



N 56°04'39.26"
E 12°55'05.63"

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2010-01-17 : 03:44:43
quote:
Originally posted by mctaff

...so towards the end of the run I'd only need to be looking at the last portion of the one min data.

Any ideas on a better approach?
Something like this:

SELECT DATEADD(MINUTE, DATEDIFF(MINUTE, 0, DateTimeColumn) / 15 * 15, 0) AS DateTimeInterval,
MAX(DateTimeColumn) AS DateTimeColumn
FROM Table1
GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, DateTimeColumn) / 15 * 15, 0)
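
If you then need the whole one min row for each interval rather than just the datetime, you could join that result back to the table - a sketch, using the same assumed table and column names as above:

SELECT t.*
FROM (
    SELECT MAX(DateTimeColumn) AS DateTimeColumn
    FROM Table1
    GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, DateTimeColumn) / 15 * 15, 0)
) AS d
INNER JOIN Table1 AS t ON t.DateTimeColumn = d.DateTimeColumn;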


N 56°04'39.26"
E 12°55'05.63"

mctaff
Starting Member

13 Posts

Posted - 2010-01-17 : 06:18:44
Thanks for the interest and assistance Peso.

The moving average is related, but that's about preprocessing the data before analysis - I managed to sort that one out...

I'll grab some sample data etc and post later...


Cheers,

David.