Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 09:50:51
I just ran into the following problem using SQL 2000 SP3. I have a table whose primary key is made up of two fields, an integer and a datetime. I executed the following in ISQLW and got a duplicate key error:

INSERT INTO tbl_events (product_id, event_time)
VALUES (243, '06/27/2005 06:23:39:704')

After I got the duplicate key error, I ran the following SELECT to check the existing data:

SELECT * FROM tbl_events
WHERE product_id = 243

which returned the following rows:

2005-06-27 06:23:39.703
2005-06-27 06:23:39.577
2005-06-27 06:23:23.813
2005-06-26 11:51:57.577
2005-06-26 11:51:57.453

None of those rows match what I am trying to insert! One row is within a millisecond of the one I am inserting, but it is not a duplicate. So I tried increasing the value I was trying to insert by one millisecond to see what happened:

INSERT INTO tbl_events (product_id, event_time)
VALUES (243, '06/27/2005 06:23:39:705')

Result?

(1 row(s) affected)

Now here's where it gets really strange. I did another SELECT of everything for that product, and here's what I got:

2005-06-27 06:23:39.707
2005-06-27 06:23:39.703
2005-06-27 06:23:39.577
2005-06-27 06:23:23.813
2005-06-26 11:51:57.577
2005-06-26 11:51:57.453

WTF?! I inserted a date with a millisecond value of "705" and now a row with "707" is in the table! And when I try to insert "704", which is obviously not in the table, SQL gives me a duplicate key error. Why is SQL not handling dates to millisecond precision? Any help or ideas would be greatly appreciated.
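The arithmetic behind this is not spelled out until later in the thread, but the observed values are consistent with datetime storing the time of day as 1/300-second ticks. A rough Python sketch of that behavior (the exact rounding rule at tick boundaries is an assumption, not SQL Server's published algorithm):

```python
def displayed_ms(ms):
    # datetime keeps the time of day as ticks of 1/300 second, so any
    # millisecond value is snapped to the nearest tick and comes back
    # out as .xx0 / .xx3 / .xx7.
    ticks = int(ms * 300 / 1000 + 0.5)    # nearest tick, halves round up
    return int(ticks * 1000 / 300 + 0.5)  # milliseconds actually stored

print(displayed_ms(704))  # 703: snaps onto the existing .703 row, hence the duplicate key
print(displayed_ms(705))  # 707: the mysterious ".707" row that appeared
```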
Bustaz Kool
Master Smack Fu Yak Hacker
1834 Posts
Posted - 2005-06-28 : 10:06:01
Can we see your table definition statement (CREATE TABLE...)?

HTH
=================================================================
'Tis with our judgements as our watches: none
Go just alike, yet each believes his own.
-Alexander Pope, poet (1688-1744)
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 10:15:29
Well, I found something, and it's not good news at all! According to the following article, SQL Server does not accurately store datetime information to the millisecond!

http://www.windowsitpro.com/Articles/Print.cfm?ArticleID=15574

About halfway down the page you will find the information about how SQL Server is only precise to one three-hundredth of a second, and rounds milliseconds accordingly. That is very bad for me. We have machines that are spitting out log files with information accurate to the millisecond for tracking our customers' machine activity. I am supposed to store all these log files in a SQL Server table, and that isn't looking good with SQL Server.
madhivanan
Premature Yak Congratulator
22864 Posts
Posted - 2005-06-28 : 10:23:06
Yes, as specified in that article, it rounds the final digit to 0, 3, or 7:

declare @t table(i int, d datetime)
declare @d datetime
set @d='12/12/2005 12:12:12:201'
insert into @t values(1,'12/12/2005 12:12:12:201')
insert into @t values(1,'12/12/2005 12:12:12:202')
insert into @t values(1,'12/12/2005 12:12:12:203')
insert into @t values(1,'12/12/2005 12:12:12:204')
insert into @t values(1,'12/12/2005 12:12:12:205')
insert into @t values(1,'12/12/2005 12:12:12:206')
insert into @t values(1,'12/12/2005 12:12:12:207')
insert into @t values(1,'12/12/2005 12:12:12:208')
insert into @t values(1,'12/12/2005 12:12:12:209')
insert into @t values(1,'12/12/2005 12:12:12:210')
insert into @t values(1,@d)
select * from @t

Madhivanan
Failing to plan is Planning to fail
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 10:29:44
Found another article that supports it:

http://www.sql-server-performance.com/bs_date_time.asp

I can't believe SQL Server would store data to the millisecond but not be ACCURATE to the millisecond! That is so ridiculous. It should either be accurate, or only allow you to store to the hundredth. Does anyone know if SQL 2005 will address this? Does anyone know of a workaround to store this kind of data? Someone else out there has to be in a SQL environment and need to store datetime info accurate to the millisecond.

I guess I could store the millisecond off in an integer field and have the primary key span three fields, like:

INSERT INTO tbl_events (product_id, event_time, millisecond)
VALUES (243, '06/27/2005 06:23:39:70', 4)

Not exactly pretty, and I'm not sure how badly that would hurt SELECT retrievals based on the date fields. Oh, this is a mess...
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 10:55:03
Another thought I had... Is there any way to convert the data to a number and store it accurately in something like a "real" field? If I had a string with a date accurate to a millisecond, '2005-06-27 06:23:39.707', how could I convert that to a "real"? And is "real", or anything else, large enough to store it accurately?
X002548
Not Just a Number
15586 Posts
Posted - 2005-06-28 : 12:19:33
How did you get that millisecond value in the first place? I believe it has something to do with the clock cycles on the computer itself. Arnold pointed this one out to us a while ago; I'll try to look for that and post the link here.

If you're doing singleton INSERTs, why not just default the event_time column to GetDate() and have your code retry on a collision? Or you can do what you proposed:

USE Northwind
GO
SET NOCOUNT ON
CREATE TABLE tbl_events99 (
      product_id int
    , event_time datetime
    , event_ms   int
    , PRIMARY KEY (product_id, event_time, event_ms)
)
GO
INSERT INTO tbl_events99 (product_id, event_time, event_ms)
SELECT 243, LEFT('2005-06-27 06:23:39.703',19), RIGHT('2005-06-27 06:23:39.703',3) UNION ALL
SELECT 243, LEFT('2005-06-27 06:23:39.577',19), RIGHT('2005-06-27 06:23:39.577',3) UNION ALL
SELECT 243, LEFT('2005-06-27 06:23:23.813',19), RIGHT('2005-06-27 06:23:23.813',3) UNION ALL
SELECT 243, LEFT('2005-06-26 11:51:57.577',19), RIGHT('2005-06-26 11:51:57.577',3) UNION ALL
SELECT 243, LEFT('2005-06-26 11:51:57.453',19), RIGHT('2005-06-26 11:51:57.453',3)
GO
INSERT INTO tbl_events99 (product_id, event_time, event_ms)
VALUES (243, LEFT('06/27/2005 06:23:39:704',19), RIGHT('06/27/2005 06:23:39:704',3))
GO
SELECT * FROM tbl_events99
GO
SET NOCOUNT OFF
DROP TABLE tbl_events99
GO

Brett
8-)
Hint: Want your questions answered fast? Follow the directions in this link:
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2005-06-28 : 12:25:08
Dates are stored in 8 bytes in SQL Server, so I am sure you could store your data in a numeric column. I would suggest numeric(15,3). This would use 9 bytes of storage and allow you to store the range of dates from 1753/1/1 through 9999/12/31 in seconds (the same date range that SQL Server datetime covers), with three-digit accuracy on the milliseconds. Everything to the left of the decimal point would be seconds, and milliseconds would be to the right. This would also make it easy to convert to a SQL Server datetime value, because this is very close to the way SQL Server stores its dates.

I would stay away from REAL or FLOAT. They are represented internally as binary numbers, so there is not always an exact conversion to decimal numbers; they are better suited for scientific applications. For example:

select Test = .9e

Test
-------------------
0.90000000000000002

You should always read the documentation, especially SQL Server Books Online, before you start developing.

CODO ERGO SUM
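The binary-representation problem with REAL/FLOAT is easy to demonstrate outside SQL Server. A sketch in Python, where `Decimal` plays the role of an exact type like numeric(15,3):

```python
from decimal import Decimal

# The double closest to 0.9 is not exactly 0.9, which is the same reason
# REAL/FLOAT columns cannot hold millisecond values exactly.
print(Decimal(0.9))       # 0.90000000000000002220446049250313...
# An exact decimal type keeps a seconds.milliseconds value intact:
print(Decimal("39.704"))  # 39.704
```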
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 12:28:22
quote: Originally posted by X002548:
How did you get that millisecond value in the first place?

We have a program written in C that records events on a computer to a text file. These text files are gathered from multiple machines and brought back to our corporate site for processing. The text files are then parsed line by line and imported into SQL. The program that creates the files is accurate to the millisecond.

I found an article that says this problem will NOT be addressed with the release of SQL 2005! I also found that IBM's DB2 is accurate to the microsecond (6 places) and Oracle 9i is accurate to the nanosecond (9 places). How can MS not address SQL not even being accurate to the millisecond when the other big players do?
X002548
Not Just a Number
15586 Posts
Posted - 2005-06-28 : 12:38:51
Don't take this personally (and this is in no way to defend M$), but I don't like your primary key in the first place. Can you post the DDL of the table? And how much data are we talking about? Doing singleton inserts from flat files from disparate systems sounds like the looooooong way around the mountain. You should bcp the data in.

Brett
8-)
Hint: Want your questions answered fast? Follow the directions in this link:
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 13:00:31
quote: Originally posted by X002548:
Don't take this personally (and this is in no way to defend M$), but I don't like your primary key in the first place. Can you post the DDL of the table? And how much data are we talking about? Doing singleton inserts from flat files from disparate systems sounds like the looooooong way around the mountain. You should bcp the data in.

No offense taken. I'm interested to hear what exactly you don't like about the primary key; I'm always open to a better way of doing things. What I posted was actually a simplified version of the actual table structure, although the key elements remain. I thought if I posted the full table, it might take too long to explain everything.

The reason we currently do line-by-line processing instead of a bcp is that lookups and new inserts to other tables may have to be made based on the data in the file. Maybe this can be done with bcp, I don't know; I haven't really worked with bcp beyond simple inserts from a file to a single table.

I don't know if this will cause more confusion than trying to simplify the table, but here is a script for creating the actual table, the primary key, and an index on the table:

CREATE TABLE [dbo].[tbl_insite_feu_radio_data_points] (
  [feu_radio_id] [int] NOT NULL ,
  [record_time] [datetime] NOT NULL ,
  [server_radio_id] [int] NULL ,
  [server_install_id] [int] NULL ,
  [insite_feu_id] [int] NULL ,
  [port_number] [int] NULL ,
  [current_state] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [last_response] [bit] NULL ,
  [last_response_time] [datetime] NULL ,
  [csafe_command] [char] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [csafe_data_1] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [csafe_data_2] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [csafe_data_3] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
  [insite_message_id] [int] NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[tbl_insite_feu_radio_data_points] WITH NOCHECK
ADD CONSTRAINT [PK_tbl_feu_radio_data_points] PRIMARY KEY CLUSTERED
( [feu_radio_id], [record_time] ) WITH FILLFACTOR = 90 ON [PRIMARY]
GO

CREATE INDEX [IX_tbl_insite_feu_radio_data_points]
ON [dbo].[tbl_insite_feu_radio_data_points] ([server_install_id])
WITH FILLFACTOR = 90 ON [PRIMARY]
GO

The table currently contains 40,564,939 rows. Processing the files and inserting them is fairly fast (at least fast enough for the needs of the system), and selects from the table are instant.

The reason the primary key is the combination of the feu_radio_id integer and the record_time datetime field is that we have multiple events occurring on each client (feu_radio), and the only thing that makes a row unique is the time (to the millisecond) at which it occurs. The same client can have the same event happen numerous times, and various events happen within milliseconds of each other. Up until now, events were only occurring every tenth or maybe hundredth of a second, but the frequency has increased and we can now have events every millisecond. I am not sure what else you would use as the primary key in this situation, but I'd be interested in hearing your thoughts. Thanks.
X002548
Not Just a Number
15586 Posts
Posted - 2005-06-28 : 14:30:17
What's an event? Is it likely that the same event could happen within the same millisecond? The reason I don't like it is that I wouldn't think you will ever use the datetime as a predicate... plus I'm surprised you don't get hot spots on your database. Maybe that's why you do singleton inserts? Also, you incur a lot of logging, where a bcp would be minimally logged. Can you show us an example of a SELECT statement that's used to retrieve data from this table?

Brett
8-)
Hint: Want your questions answered fast? Follow the directions in this link:
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 14:39:44
quote: Originally posted by X002548:
What's an event? Is it likely that the same event could happen within the same millisecond? The reason I don't like it is that I wouldn't think you will ever use the datetime as a predicate... plus I'm surprised you don't get hot spots on your database. Maybe that's why you do singleton inserts? Also, you incur a lot of logging, where a bcp would be minimally logged. Can you show us an example of a SELECT statement that's used to retrieve data from this table?

We have radios that report data back to servers. There could be multiple events that occur in the same millisecond, but they would be from different radios; each radio can only report one data event per millisecond. A typical select for reporting would be:

SELECT current_state
FROM tbl_insite_feu_radio_data_points
WHERE feu_radio_id = 243
AND record_time >= '1/1/2005'
AND record_time < '2/1/2006'

I am not sure what you mean by "hot spots".
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2005-06-28 : 17:42:42
Here is another idea for a workaround on this. Convert column [record_time] to smalldatetime, and store the milliseconds in a new column, [record_milliseconds]. This would let you continue to do lookups by date, and would actually use less storage, because the 4-byte smalldatetime and 2-byte smallint would be 2 bytes less than the 8-byte datetime. You would have to add the new column [record_milliseconds] to your primary key to make it unique.

CREATE TABLE [dbo].[tbl_insite_feu_radio_data_points] (
  [feu_radio_id] [int] NOT NULL ,
  [record_time] [smalldatetime] NOT NULL ,
  [record_milliseconds] [smallint] NOT NULL ,
  [server_radio_id] [int] NULL ,
  ... rest of table ...
)

ALTER TABLE [dbo].[tbl_insite_feu_radio_data_points]
WITH NOCHECK ADD CONSTRAINT [PK_tbl_feu_radio_data_points]
PRIMARY KEY CLUSTERED
( [feu_radio_id], [record_time], [record_milliseconds] )
WITH FILLFACTOR = 90 ON [PRIMARY]

CODO ERGO SUM
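Feeding a two-column layout like this from the log files could look something like the following (a hypothetical Python parser for the 'YYYY-MM-DD HH:MM:SS.mmm' strings seen in the thread, not code from the post; it keeps the timestamp down to the whole second so nothing is lost before the millisecond column takes over):

```python
from datetime import datetime

def split_timestamp(stamp):
    # Separate 'YYYY-MM-DD HH:MM:SS.mmm' into a whole-second timestamp
    # (for the date/time column) and an integer millisecond value
    # (for the new [record_milliseconds]-style column).
    whole = datetime.strptime(stamp[:19], "%Y-%m-%d %H:%M:%S")
    ms = int(stamp[20:23])
    return whole, ms

whole, ms = split_timestamp("2005-06-27 06:23:39.707")
print(whole, ms)  # 2005-06-27 06:23:39 707
```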
BlakeK
Starting Member
10 Posts
Posted - 2005-06-28 : 18:58:25
quote: Originally posted by Michael Valentine Jones:
Here is another idea for a workaround on this. Convert column [record_time] to smalldatetime, and store the milliseconds in a new column, [record_milliseconds]. This would let you continue to do lookups by date, and would actually use less storage, because the 4-byte smalldatetime and 2-byte smallint would be 2 bytes less than the 8-byte datetime. You would have to add the new column [record_milliseconds] to your primary key to make it unique. [CREATE TABLE script snipped; see the previous post.]

How funny! LOL! This is exactly what I ended up doing! I even named the field the same thing! Hopefully I won't ever have to do reporting based on ranges of milliseconds.
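Range reporting over the split columns is still workable by comparing (time, milliseconds) pairs, since lexicographic pair ordering reproduces the ordering the single datetime column had. A hypothetical Python sketch of the logic, not from the thread (in T-SQL this would become a compound AND/OR predicate):

```python
def in_range(row, start, end):
    # row, start, end are (timestamp, ms) pairs; tuple comparison
    # matches the ordering of the original full-precision time.
    return start <= row < end

rows = [("2005-06-27 06:23:39", 703), ("2005-06-27 06:23:39", 707)]
hits = [r for r in rows
        if in_range(r, ("2005-06-27 06:23:39", 704), ("2005-06-27 06:23:40", 0))]
print(hits)  # [('2005-06-27 06:23:39', 707)]
```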