
 high volume inserts per second


mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2008-12-17 : 06:10:22
Hi,

I'm creating an application that will INSERT into a logging table. We are inserting every time a web server hit is generated, and I am worried about the number of connections created, etc., for all of these inserts.

Running sql2k8/IIS7/ASP.NET

What's a reasonable number of inserts I can achieve with everything on one box?

Is 1000 per second too high a figure? I don't plan to go this high off the bat, but my design definitely needs to be able to handle it. If it is in fact a very high requirement, what is likely to be my main bottleneck for inserts?

Connections?
Disk I/O ?


Any suggestions greatly appreciated.

Thanks!
Mike123


jackv
Master Smack Fu Yak Hacker

2179 Posts

Posted - 2008-12-21 : 11:39:53
Could you post some details about your tables - for example, are there indexes?
How many CPUs have you got?
Are you using Full Recovery or Simple Recovery?
Could you post a typical INSERT statement?

Jack Vamvas
--------------------
http://www.ITjobfeed.com

spirit1
Cybernetic Yak Master

11752 Posts

Posted - 2008-12-21 : 14:47:52
Use SQL Server Service Broker for this.
It is a great async tool that will put all your messages into a queue and process them when resources are available.
Your app won't feel a thing.

In SQL Server 2008 you may also want to look into Change Data Capture or built-in auditing.
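
As a rough illustration of the Service Broker pattern described above (not a tested setup - the object names ImageLogMessage, ImageLogContract, ImageLogQueue and the two services are all invented for this sketch):

-- Service Broker must be enabled on the database first, e.g.:
-- ALTER DATABASE MyLoggingDb SET ENABLE_BROKER;

CREATE MESSAGE TYPE ImageLogMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT ImageLogContract (ImageLogMessage SENT BY INITIATOR);

CREATE QUEUE ImageLogQueue;
CREATE SERVICE ImageLogInitiatorService ON QUEUE ImageLogQueue;
CREATE SERVICE ImageLogTargetService ON QUEUE ImageLogQueue (ImageLogContract);
GO

-- The web-facing proc would SEND a message instead of inserting directly,
-- so each hit is queued and processed when the server has resources:
DECLARE @h UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @h
    FROM SERVICE ImageLogInitiatorService
    TO SERVICE 'ImageLogTargetService'
    ON CONTRACT ImageLogContract
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE ImageLogMessage
    (N'<log BannerID="1" UserID="2" DomainID="0" Status="S" />');

-- A separate activation procedure would RECEIVE from ImageLogQueue and run the
-- existing INSERT/UPDATE logic, ideally in batches.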


___________________________________________________________________________
Causing trouble since 1980
Blog: http://weblogs.sqlteam.com/mladenp
Speed up SSMS development: www.ssmstoolspack.com <- version 1.1 out!

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2008-12-21 : 17:46:15
Tell us about your IO config.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2008-12-29 : 07:08:54
Hey Guys,

much appreciated..

Tara: No set IO config yet - should this be my main concern going forward? Currently just on a testing box with 1 IDE drive :(, which I definitely don't plan to keep. Just finishing the last portions of the application before moving to this phase.

Spirit1: Looked into your advice; I was unaware of SQL Server Service Broker, and it looks quite interesting. I have posted my SPROC here - I think perhaps it's not a candidate because of the return values, but perhaps there's a way around that. I read an article on "SQL Server Auditing" and couldn't figure out how it would be applicable to this situation. Any guidance much appreciated!


jackv: Just 1 processor for now. Planning on having multiple - are inserts CPU intensive?
Regarding Simple vs. Full Recovery, I am using whatever the default is in SQL 2008. Sorry, I have never changed this option - should I be researching this more?
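
(As a quick aside on the recovery model question: the current setting can be checked with a catalog query; a new SQL Server 2008 database inherits its recovery model from the model database, which is typically FULL.)

-- Run in the logging database; shows SIMPLE, FULL, or BULK_LOGGED
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = DB_NAME();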


Please see the SPROC below. I will post the table structure with indexes in the next post. You can see how it looks with this SP already, though (dbo.ImageLog).

Would love some constructive criticism.

Thanks very much!
mike123





CREATE PROCEDURE dbo.insert_ImageLog
(
    @BannerID INT,
    @UserID INT,
    @DomainID INT = NULL,
    @Status CHAR(1)
)
AS
BEGIN
    IF @DomainID = 0
        SET @DomainID = NULL

    DECLARE @retValue INT
    SET NOCOUNT ON;

    INSERT INTO dbo.ImageLog
    (
        BannerID,
        UserID,
        DomainID,
        Status,
        UsageDate
    )
    VALUES
    (
        @BannerID,
        @UserID,
        @DomainID,
        @Status,
        GETDATE()
    )

    IF @@ERROR = 0
    BEGIN
        SET @retValue = 1

        IF @Status = 'S'
        BEGIN
            DECLARE @BannerSize BIGINT
            SELECT @BannerSize = BannerSize FROM dbo.Banner WHERE BannerID = @BannerID

            UPDATE dbo.[User] SET BalanceCreditSize = BalanceCreditSize - @BannerSize WHERE UserID = @UserID
        END
    END
    ELSE
        SET @retValue = -1

    SELECT @retValue
END
GO



tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2008-12-29 : 11:16:25
Yes, IO would be my main concern if I had a system like this. Buy the fastest drives you can afford and configure them with the appropriate RAID. Typically we use RAID 5 for the MDF files and RAID 10 for the LDF files. You may want to consider using RAID 10 for the MDF files instead.

I would probably put only one index on that table, have it be clustered, and have it follow the order of the inserts. I would then perhaps have any select statements run against a copy of that table rather than the actual table. If you do decide to run them both against the same table, then I would suggest using read committed snapshot as your isolation level so that reads don't block writes.
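
A rough sketch of those two settings, assuming the database is named MyLoggingDb (invented name) and that the log table keeps a single clustered index in insert order on UsageDate:

-- Readers stop blocking the logging writes once read committed snapshot is on
-- (switching it on needs exclusive access to the database):
ALTER DATABASE MyLoggingDb SET READ_COMMITTED_SNAPSHOT ON;
GO

-- A single clustered index that follows insert order; UsageDate is ever-increasing here.
-- (Any existing clustered primary key would have to be dropped or rebuilt first.)
CREATE CLUSTERED INDEX CIX_ImageLog_UsageDate ON dbo.ImageLog (UsageDate);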

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2008-12-29 : 12:02:35
You didn’t post your table definition for the ImageLog table. I would recommend that your table have an identity clustered primary key, probably with a data type of BIGINT, if the volume is going to be that high.

You might also run into issues with the update you are doing, since updates can use more resources than inserts. You might consider deferring that update and batching it up based on ImageLog.UsageDate so that you don't have to do an update with each transaction.

If you have concerns about the ability of the database server to keep up, you might consider using message queuing on the web servers to allow the insert information to be queued until the SQL Server can catch up.
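
A rough sketch of such a deferred, batched update, run from a scheduled job; dbo.ImageLogWatermark and its LastProcessedDate column are invented here purely to remember how far the last batch got:

DECLARE @from DATETIME, @to DATETIME;

SELECT @from = LastProcessedDate FROM dbo.ImageLogWatermark;  -- hypothetical watermark table
SET @to = GETDATE();

-- Apply all pending balance deductions in one pass instead of one per hit.
UPDATE u
SET u.BalanceCreditSize = u.BalanceCreditSize - t.TotalSize
FROM dbo.[User] AS u
INNER JOIN
(
    SELECT il.UserID, SUM(b.BannerSize) AS TotalSize
    FROM dbo.ImageLog AS il
    INNER JOIN dbo.Banner AS b ON b.BannerID = il.BannerID
    WHERE il.Status = 'S'
      AND il.UsageDate >= @from
      AND il.UsageDate < @to
    GROUP BY il.UserID
) AS t ON t.UserID = u.UserID;

UPDATE dbo.ImageLogWatermark SET LastProcessedDate = @to;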






CODO ERGO SUM

mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2008-12-29 : 12:11:09
Sorry, it slipped my mind. Here is my current table:

Actually, I am wondering if this uniqueidentifier is the right datatype for the job here; I am thinking that BIGINT might be better. I can't remember the reasoning for choosing uniqueidentifier at the time...

Is this something you advise against?

How would I go about deferring updates? Are you talking about putting this logic in the front end application, or in SQL? Or is this what you mean by "message queueing"? (I am unfamiliar with it, just skimming the topic now.)

again much appreciated!

Thanks,
mike123


CREATE TABLE [dbo].[BannerUsage](
[BannerUsageID] [uniqueidentifier] NOT NULL,
[BannerID] [int] NULL,
[DomainID] [int] NULL,
[UsageDate] [datetime] NOT NULL,
[UserID] [int] NULL,
[Status] [char](1) NULL,
CONSTRAINT [PK_BannerUsage] PRIMARY KEY CLUSTERED
(
[BannerUsageID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2008-12-29 : 12:30:14
I would avoid using a UNIQUEIDENTIFIER as the clustered primary key. The problem is that new values are generated at random all over the table, which results in IO-expensive page splits as rows are added to full pages.

I would also avoid nullable columns on any column that doesn't really need to allow nulls.

Even though it will use more resources, I would still put foreign key constraints on the BannerID, DomainID, UserID, and Status columns. There is no point in collecting a bunch of invalid data. If the domain of the Status column is very small, you could use a check constraint instead of a foreign key.
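
A rough sketch of the table along those lines - BIGINT identity clustered key, NOT NULL where possible, foreign keys, and a check constraint on Status. The dbo.Domain table and the 'F' status code are assumptions; substitute the real referenced tables and the real Status domain:

CREATE TABLE dbo.BannerUsage
(
    BannerUsageID BIGINT IDENTITY(1, 1) NOT NULL,
    BannerID INT NOT NULL,
    DomainID INT NULL,            -- stays nullable because the proc maps 0 to NULL
    UsageDate DATETIME NOT NULL,
    UserID INT NOT NULL,
    Status CHAR(1) NOT NULL,
    CONSTRAINT PK_BannerUsage PRIMARY KEY CLUSTERED (BannerUsageID),
    CONSTRAINT FK_BannerUsage_Banner FOREIGN KEY (BannerID) REFERENCES dbo.Banner (BannerID),
    CONSTRAINT FK_BannerUsage_User FOREIGN KEY (UserID) REFERENCES dbo.[User] (UserID),
    CONSTRAINT FK_BannerUsage_Domain FOREIGN KEY (DomainID) REFERENCES dbo.Domain (DomainID),  -- assumed table
    CONSTRAINT CK_BannerUsage_Status CHECK (Status IN ('S', 'F'))  -- 'F' is a placeholder; list the real codes
);
GO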


CODO ERGO SUM

mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2008-12-29 : 12:38:29
Agree on all points! Will be making some changes here. :)

How many transactions per second do you think is a reasonable goal on, let's say, a quad-core box / RAID 5 / 4 GB RAM? Is it too hard to say? Just looking for a rough number... not very experienced with load testing.

thanks once again! very helpful
mike123

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2008-12-30 : 02:20:23
You can also speed up the sproc by not returning a resultset back to the client.
Either use an output parameter for @retValue, or use the default RETURN_VALUE parameter.
CREATE PROCEDURE dbo.insert_ImageLog
(
    @BannerID INT,
    @UserID INT,
    @DomainID INT = NULL,
    @Status CHAR(1)
)
AS

SET NOCOUNT ON

INSERT dbo.ImageLog
(
    BannerID,
    UserID,
    DomainID,
    Status,
    UsageDate
)
VALUES
(
    @BannerID,
    @UserID,
    NULLIF(@DomainID, 0),
    @Status,
    GETDATE()
)

IF @@ERROR <> 0
    RETURN -1

IF @Status = 'S'
    UPDATE u
    SET u.BalanceCreditSize = u.BalanceCreditSize - b.BannerSize
    FROM dbo.[User] AS u
    INNER JOIN dbo.Banner AS b ON b.BannerID = @BannerID
    WHERE u.UserID = @UserID

RETURN 0
GO



E 12°55'05.63"
N 56°04'39.26"

mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2008-12-30 : 08:06:09
Hi Peso,

Great point. Will be making your suggested changes :)

Thanks,
Mike


mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2009-01-08 : 07:42:10
quote:
Originally posted by Peso

You can also speed up the sproc by not returning a resultset back to client.
Either use a parameter for @retValue, or use the default RETURN_VALUE parameter.



Does anyone know of any speed tests or documents on this? I have been googling but can't find any.

In this scenario it's worth it for me to gain even a 0.01% performance improvement; I just want to see what I have to gain for other SPs by upgrading some old ones.


Many thanks once again!
mike123

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-01-08 : 07:47:37
You should cut time by at least 10%.
Make sure your ASP code is set up correctly and adExecuteNoRecords is set.
That way there is no overhead for ADODB to build the mechanism for handling a returned resultset.
The parameters are already there when building the COMMAND object, right?


E 12°55'05.63"
N 56°04'39.26"

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-01-08 : 07:50:02
Also see http://support.microsoft.com/kb/285295



E 12°55'05.63"
N 56°04'39.26"

mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2009-01-08 : 08:34:17
Hey Peso,

Wow, 10% - that's a lot more than I expected. Is this taking into account the ADO optimizations as well? I am guessing it is.

I just looked it up, and by calling objCommand.ExecuteNonQuery() on the command object we are achieving the same thing as "adExecuteNoRecords" did in classic ASP. I am now using ASP.NET.

Great tips! Much appreciated!

mike123

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-01-08 : 08:42:41
That is not entirely true.
ExecuteNonQuery returns the number of rows affected by the query:
quote:
For UPDATE, INSERT, and DELETE statements, the return value is the number of rows affected by the command. For all other types of statements, the return value is -1. If a rollback occurs, the return value is also -1.

If you use ExecuteNonQuery, you must pass your desired values as parameters.
quote:
Although the ExecuteNonQuery returns no rows, any output parameters or return values mapped to parameters are populated with data.

So you will have to see if @RETURN_VALUE is still parameter 0, or if you have to supply a @retValue parameter to the procedure head.
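
For reference, this is how the two options look from the T-SQL side against Peso's version of the procedure; the OUTPUT-parameter variant in the comment is hypothetical and would require adding the extra parameter to the procedure head:

-- Option 1: capture the RETURN value (what ADO.NET surfaces as parameter 0 / @RETURN_VALUE):
DECLARE @rc INT;
EXEC @rc = dbo.insert_ImageLog @BannerID = 1, @UserID = 2, @DomainID = 0, @Status = 'S';
SELECT @rc AS ReturnValue;  -- 0 on success, -1 on error in Peso's version

-- Option 2 (hypothetical): add @retValue INT OUTPUT to the procedure head and map it explicitly:
-- EXEC dbo.insert_ImageLog @BannerID = 1, @UserID = 2, @DomainID = 0, @Status = 'S',
--     @retValue = @rc OUTPUT;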



E 12°55'05.63"
N 56°04'39.26"

mike123
Master Smack Fu Yak Hacker

1462 Posts

Posted - 2009-01-08 : 09:21:43
Hey Peso,

Great, the more optimizations I can make the better :) I will look into whether I can turn off the ADO.NET check for the number of rows affected by the query (I have used SET NOCOUNT ON on the SQL end).

Passing my desired values as parameters is perfect and how I will proceed!

thanks again!
mike123

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-01-08 : 09:25:58
Great!

Keep us posted about future progress.


E 12°55'05.63"
N 56°04'39.26"
   
