senthilkmd
Starting Member
4 Posts

Posted - 2003-01-13 : 14:50:34
Hi friends, I'm new to development, and somebody told me that if we remove the primary key constraint from a SQL Server table (while deploying the database on the client side), it will be faster. They said that SQL Server has to search the entire table whenever we insert a row or update a primary key column, and that this takes extra time. Please assume we have 2 lakhs (200,000) records. Help me out. With advance thanks, Senthil
rihardh
Constraint Violating Yak Guru
307 Posts

Posted - 2003-01-13 : 15:31:44
You should read more books instead of listening to people talking bull..it!
AjarnMark
SQL Slashing Gunting Master
3246 Posts

Posted - 2003-01-13 : 15:32:37
Senthil, my experience, and the experience of others I've read here, strongly suggests that with SQL Server the steps that theoretically should improve performance don't always show up in real-world results. So, overall, if you're really looking for performance improvements, the answer is: test, test, test.

With that said, here are a few other things to consider...

1) If I remember correctly, a lakh is roughly 100,000, so 2 lakhs of records would be 200,000 records. While this number appears large at first glance, it is nowhere near pushing the performance limits of SQL Server.

2) Adding a primary key constraint involves the creation of an index, so any searching on the key values during an insert or update should use that index rather than a full table scan, and performance should not take much of a hit.

3) The PK constraint is there for data integrity purposes. If you remove it to improve performance during deployment, then you have either forfeited data integrity, or you'll need to re-create the constraint once the records are in place, and then you'll consume time for that effort. Or, if you want to do away with the PK constraint entirely, you'll need to come up with some other (almost guaranteed LESS efficient) method to ensure data integrity.

Speaking only from a theoretical point of view, I'd guess your performance gain would be nearly zero. You'll only know for sure by testing on your specific system, because there are so many things that can change the results from one system to another. But I really don't understand why this is a factor while "deploying" the database, or what you mean by "in client side". If this is a new software installation, why isn't it being handled by copying the .mdf files instead of a 200,000-row INSERT?

------------------------------------------------------
The more you know, the more you know you don't know.
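The compromise in point 3 above can be sketched in T-SQL. This is only an illustration; the table, column, and constraint names are hypothetical, not from the thread. The idea is to bulk-load the rows first and add the primary key afterwards, so the backing index is built in one pass and integrity is still enforced from then on:

```sql
-- Hypothetical table for illustration only.
CREATE TABLE dbo.Customer (
    CustomerID INT          NOT NULL,
    Name       VARCHAR(100) NOT NULL
);

-- Bulk-load the 200,000 rows here, with no PK in place
-- (e.g. via BULK INSERT or a series of INSERT statements).

-- Then create the constraint once. This builds the supporting
-- index in a single operation and re-establishes data integrity.
-- It will fail if the loaded data contains duplicate CustomerID
-- values, which is exactly the check the constraint exists for.
ALTER TABLE dbo.Customer
    ADD CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID);
```

Whether this is actually faster than loading with the PK already in place depends on the system, which is why the advice above is to test both ways.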