mctaff
Starting Member
13 Posts
Posted - 2010-01-16 : 11:59:30
Hi,

I'm running the following query on a table with over 1 million rows...

SELECT TOP 1 @open = [open], @date = [date]
FROM FTS.dbo.GBPJPY1
WHERE [date] > @firstPositionOpen
	AND [low] < @firstPositionStopLoss
ORDER BY [date]

After 5 minutes I killed the query... If I comment out either of the two WHERE conditions it returns in 2 seconds!

Any ideas on how I can keep both conditions but retain the performance?
SwePeso
Patron Saint of Lost Yaks
30421 Posts
Posted - 2010-01-16 : 12:16:22
Make sure you have a covering index over columns Date and Low, with column Open as included.

CREATE NONCLUSTERED INDEX IX_Peso
ON dbo.GBPJPY1 (Date, Low)
INCLUDE (Open)

Create this index, retry your query and report back (post) your new timing.

N 56°04'39.26"E 12°55'05.63"
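[Editor's note: for the before/after timing Peso asks for, something along these lines can be run in Management Studio. This is a generic T-SQL sketch, not code from the thread - the variable types and literal values are made-up assumptions, since the table definition was never posted.]

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Assumed types (datetime for the timestamps, money for the prices) and
-- sample values; adjust to match the real GBPJPY1 column definitions.
DECLARE @open money, @date datetime;
DECLARE @firstPositionOpen datetime = '2010-01-04 09:00';
DECLARE @firstPositionStopLoss money = 150.25;

SELECT TOP 1 @open = [open], @date = [date]
FROM FTS.dbo.GBPJPY1
WHERE [date] > @firstPositionOpen
	AND [low] < @firstPositionStopLoss
ORDER BY [date];

-- The Messages tab then shows logical reads and elapsed/CPU time,
-- which makes the effect of the new index easy to compare.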
Kristen
Test
22859 Posts
Posted - 2010-01-16 : 12:37:07
Interesting. Will that one index cover both criteria?

I reckon you may have to put [...] around those reserved words though ...

CREATE NONCLUSTERED INDEX IX_Peso
ON dbo.GBPJPY1 ([Date], [Low])
INCLUDE ([Open])
mctaff
Starting Member
13 Posts
Posted - 2010-01-16 : 12:51:03
Just noticed I'm posting these in the wrong forum! Using SQL Server 08.

Also, I didn't point out that the code is being executed multiple times in a loop - 1000s of times... this clearly hampers the performance! Minutes in now with the nonclustered index...

I think I may be approaching my problem in the wrong way... I need to sequentially process data from two data streams - a one minute sample and a fifteen minute sample. I've created a view over the fifteen minute data to reduce it to a sample size of approx 5,000 rows, from 100,000+. The one minute data is 1 million plus rows.

For every entry in the fifteen minute sample, I then want to find the subsequent (by date) entry in the one minute data that fits the criteria above - i.e. [date] > given date and [low] < given value. Both these values are dynamic and will change for every entry in the fifteen minute sample, which is why I haven't created a second view for this. Each run through the fifteen minute data moves chronologically through time - so towards the end of the run I'd only need to be looking at the last portion of the one minute data.

Any ideas on a better approach?
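[Editor's note: one way to avoid the row-by-row loop is to let the server do the per-row lookup with APPLY. The following is only a sketch of that idea, not code from the thread - the view name (vGBPJPY15) and its column names (FifteenMinDate, StopLoss) are assumptions standing in for whatever the fifteen-minute view actually exposes.]

-- For each fifteen-minute row, find the first subsequent one-minute row
-- whose [low] falls below that row's stop-loss level.
SELECT	f.FifteenMinDate,
	x.[date] AS TriggerDate,
	x.[open] AS TriggerOpen
FROM	FTS.dbo.vGBPJPY15 AS f
OUTER APPLY (
	SELECT TOP 1 m.[date], m.[open]
	FROM FTS.dbo.GBPJPY1 AS m
	WHERE m.[date] > f.FifteenMinDate
		AND m.[low] < f.StopLoss
	ORDER BY m.[date]
) AS x;

With the covering index Peso suggested on ([Date], [Low]) INCLUDE ([Open]), each APPLY branch can seek on [date] rather than scan the million-row table, and the whole pass runs as one set-based statement instead of thousands of separate queries.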
SwePeso
Patron Saint of Lost Yaks
30421 Posts
Posted - 2010-01-16 : 15:50:00
You could post some sample data and DDL to illustrate your conundrum, and also your expected output.

N 56°04'39.26"E 12°55'05.63"
SwePeso
Patron Saint of Lost Yaks
30421 Posts
Posted - 2010-01-16 : 16:19:49
Is this related to the moving average problem?

N 56°04'39.26"E 12°55'05.63"
SwePeso
Patron Saint of Lost Yaks
30421 Posts
Posted - 2010-01-17 : 03:44:43
quote: Originally posted by mctaff
...so towards the end of the run I'd only need to be looking at the last portion of the one min data. Any ideas on a better approach?

Something like

SELECT	DATEADD(MINUTE, DATEDIFF(MINUTE, 0, DateTimeColumn) / 15 * 15, 0) AS DateTimeInterval,
	MAX(DateTimeColumn) AS DateTimeColumn
FROM	Table1
GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, DateTimeColumn) / 15 * 15, 0)

N 56°04'39.26"E 12°55'05.63"
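[Editor's note: if the aim is to derive the fifteen-minute sample straight from the one-minute table, Peso's grouping can be joined back to the base table to pick up the full row for the last minute in each interval. This is a sketch only, using the column names from earlier in the thread ([date], [open], [low] on GBPJPY1); none of it is from the original posts.]

SELECT	g.DateTimeInterval, t.[date], t.[open], t.[low]
FROM	(
	SELECT	DATEADD(MINUTE, DATEDIFF(MINUTE, 0, [date]) / 15 * 15, 0) AS DateTimeInterval,
		MAX([date]) AS LastDate
	FROM	FTS.dbo.GBPJPY1
	GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, [date]) / 15 * 15, 0)
) AS g
INNER JOIN FTS.dbo.GBPJPY1 AS t
	ON t.[date] = g.LastDate;

-- Each bucket's LastDate joins back to the base table to recover the
-- remaining columns of that final one-minute bar.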
mctaff
Starting Member
13 Posts
Posted - 2010-01-17 : 06:18:44
Thanks for the interest and assistance, Peso.

Moving average is related, but that's about preprocessing the data before analysis - managed to sort that one out...

I'll grab some sample data etc. and post later...

Cheers,
David.