Author |
Topic |
sapator
Constraint Violating Yak Guru
462 Posts |
Posted - 2010-01-26 : 19:48:33
|
Hi. I have a question about inserting and removing rows. When a table has an identity column set, I get auto-incremented ids. So if I remove a row that has, say, an id of 2 and then add a row, the new row gets id 3, or whatever number is next available. My question is: if I do that kind of add/remove all the time and reach an extremely high id number (the last available INT or BIGINT value), will SQL crash, or will it fill in ids that are no longer taken? If that is a problem, should I avoid the auto-increment (in which case I guess I'll have to visit the T-SQL forum, because generating the ids myself is a complex query, for me)? |
|
russell
Pyro-ma-ni-yak
5072 Posts |
Posted - 2010-01-26 : 20:16:09
|
You're going to have to do that more than 2 billion times before you run out of integers (INT), and 9,223,372,036,854,775,807 times before you run out of BIGINTs. I wouldn't worry about it at all. But to reseed the table, use DBCC CHECKIDENT (tableName, RESEED, newSeedValue). http://msdn.microsoft.com/en-us/library/ms176057.aspx |
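For reference, a minimal sketch of the reseed command mentioned above. The table name `dbo.MyTable` is made up for illustration; note that after RESEED the *next* inserted row gets newSeedValue + 1:

```sql
-- Report the current identity value without changing anything
DBCC CHECKIDENT ('dbo.MyTable', NORESEED);

-- Reseed so the next inserted row receives identity value 100
-- (dbo.MyTable is a hypothetical table with an identity column)
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 99);
```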
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-27 : 04:26:25
|
If you are deleting from the bottom (like a log table, where you clear out the "old stuff"), then all the small numbers will be available for re-use once you hit, or get near to, the limit. However, SQL will NOT skip existing values, so if you are deleting at random and some values are kept, that strategy won't work. But as Russell says, you have 2 billion inserts before it becomes a problem, and then you just change the table to use a BIGINT instead of a normal INT and get another 9 quintillion inserts ... |
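A minimal sketch of the behaviour being discussed, i.e. that identity values are not reused after a delete (the table and column names are made up for illustration):

```sql
-- Hypothetical demo table with an identity column
CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1) PRIMARY KEY, Val CHAR(1));

INSERT INTO dbo.Demo (Val) VALUES ('a');  -- gets Id 1
INSERT INTO dbo.Demo (Val) VALUES ('b');  -- gets Id 2

DELETE FROM dbo.Demo WHERE Id = 2;        -- Id 2 is now a permanent gap

INSERT INTO dbo.Demo (Val) VALUES ('c');  -- gets Id 3, NOT the freed-up 2
```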
 |
|
sapator
Constraint Violating Yak Guru
462 Posts |
Posted - 2010-01-27 : 19:58:34
|
Yes, at present it won't be a problem, but you never know (I'm hoping for Google-sized acclaim). The deletion is at random, without a pattern, btw. Thanks. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-28 : 03:04:37
|
Fix it when you have a problem. I'm sure when you get Google-sized investors they will spend to fix it! Upgrading INT to BIGINT is not hard. Using BIGINT from the get-go requires twice as much storage and gives half as many entries per index page, so it hurts performance. No need until you get to quadrillion-insert levels!! Actually ... once you get to Google size, I don't think you'll be on SQL Server any more |
 |
|
sapator
Constraint Violating Yak Guru
462 Posts |
Posted - 2010-01-28 : 19:52:04
|
Well, I don't know about Google, but ING here uses SQL. If I drop the INT column and re-create it as a BIGINT, will I get problems? Also, my identity column has an FK relationship; if I remove the relationship and drop and re-create the column, how would I preserve each number already generated on the old identity column in the new one? Thanks. Edit, let me clarify: if some numbers exist on the identity now, e.g. 1-2-6-8, I want those numbers to stay on the exact same rows they belonged to before. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-29 : 03:00:37
|
"If i drop the int column and re-create with a bigint will i get problems?" You'll need to ALTER it from INT to BIGINT. DROPping it would lose all the ID numbers, which would not be a good idea! You would also have to ALTER any columns in other tables that hold the IDs. Nothing very complicated. If you make all the changes using the Table Designer [i.e. the MSSQL GUI tool], SQL will either give you a script to do it (including taking care of all FKs) or just run the script for you. I'll reiterate that I would not use BIGINT now unless you are certain that the numbers will run out within, say, 4-5 years. You'll have much, MUCH bigger problems optimising your queries, and providing the right sort of hardware and failover systems, if you have table(s) with 2 billion rows. If you are contemplating 2 billion rows in even 10 years, you ought to be putting hugely costly effort into optimising everything now; the INT-to-BIGINT upgrade will be a walk in the park by comparison. I recommend you just park this issue. Use INTs for now. |
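A hedged sketch of the ALTER approach described above, for a hypothetical parent/child pair (`dbo.Parent`, `dbo.Child` and the constraint names are assumptions, not anything from this thread). Constraints referencing the column must be dropped before the column can be widened; existing id values and the identity property are kept:

```sql
-- 1. Drop the FK and PK that reference the column being widened
ALTER TABLE dbo.Child  DROP CONSTRAINT FK_Child_Parent;
ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;

-- 2. Widen both sides; existing values are preserved
ALTER TABLE dbo.Parent ALTER COLUMN Id BIGINT NOT NULL;
ALTER TABLE dbo.Child  ALTER COLUMN ParentId BIGINT NULL;

-- 3. Put the constraints back
ALTER TABLE dbo.Parent ADD CONSTRAINT PK_Parent PRIMARY KEY (Id);
ALTER TABLE dbo.Child  ADD CONSTRAINT FK_Child_Parent
    FOREIGN KEY (ParentId) REFERENCES dbo.Parent (Id);
```

This is essentially the script the Table Designer generates for you; on a large table, step 2 rewrites every row, so it should be done in a maintenance window.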
 |
|
sapator
Constraint Violating Yak Guru
462 Posts |
Posted - 2010-01-29 : 11:12:27
|
OK, thanks for the suggestions. I don't really know how this project will turn out, because I don't know how many users will eventually use it. I'll use plain integers wherever I can see the rows won't go into the billions. Also, thanks for the MSSQL GUI tool tip; I will try to find it, but at this point I can just remove the rows and change the table design. One last thing: because the data is in the Greek language, should I always use NVARCHAR? Does NVARCHAR add size compared to VARCHAR? Any alternatives? Thanks. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-29 : 12:14:56
|
"Is nvarchar adding size in comparison to varchar?" Yes, twice as many bytes to store a string, and half as many index entries per page. Dunno about Greek - can you get away with VARCHAR and a suitable CODE PAGE / COLLATION? The fact that it is a non-Roman character set (or am I thinking of Ancient Greek only?) suggests to me that won't work. SQL 2008 Enterprise has compression for NVARCHAR ... that's another alternative. |
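A small sketch comparing the two options: Greek text actually does fit in VARCHAR under a Greek collation (code page 1253), at half the storage of NVARCHAR. The table, column names and the `Greek_CI_AS` choice here are illustrative assumptions:

```sql
-- Hypothetical demo: same Greek word stored both ways
CREATE TABLE dbo.GreekDemo (
    NameA VARCHAR(50) COLLATE Greek_CI_AS,  -- 1 byte/char, code page 1253
    NameN NVARCHAR(50)                      -- 2 bytes/char, full Unicode
);

INSERT INTO dbo.GreekDemo (NameA, NameN) VALUES (N'Αθήνα', N'Αθήνα');

SELECT DATALENGTH(NameA) AS BytesVarchar,   -- 5 bytes
       DATALENGTH(NameN) AS BytesNvarchar   -- 10 bytes
FROM dbo.GreekDemo;
```

The trade-off: the VARCHAR column can only hold characters from that one code page, so mixing Greek with, say, Cyrillic or CJK text would silently lose data; NVARCHAR avoids that risk.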
 |
|
sapator
Constraint Violating Yak Guru
462 Posts |
Posted - 2010-01-29 : 20:41:34
|
Starting to turn the BIGINTs back into INTs. The collation is set to Greek; the problem is I remember a lot of problems with VARCHAR and Visual Studio, and I don't want to risk it if I'm not sure. P.S. Ancient and modern Greek both use non-Roman symbols. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-30 : 02:19:40
|
"Ancient and modern Greek both use non roman symbols." Thanks for that. I don't have any knowledge of non-Roman character sets in SQL, but my assumption is that you HAVE to use NVARCHAR. What annoys me more is when I see applications with NVARCHAR for everything because no one thought about it. And in those places all columns are NVARCHAR(50), because that is the default in the table-design tool |
 |
|
sapator
Constraint Violating Yak Guru
462 Posts |
Posted - 2010-01-30 : 13:14:09
|
Well, with NVARCHAR(50) I always try to see how much space I really need; if I have a P.O. box or a person's identity number, I will reduce the width. I only use NVARCHAR(MAX) on the "comments" textbox. I had not realized, however, that there is so much difference in performance between INT and BIGINT, so thank you for that. P.S. There is a possibility of the same db being used for English-speaking users, so I now know I could ALTER the NVARCHAR columns to VARCHAR; the database will be empty at that point, so it will be safe. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-30 : 13:30:38
|
Indeed. On the one hand, there is nothing wrong with VARCHAR(50) - only the characters actually entered are stored. On the other hand, the designer gave no thought to the actual width of the data - VARCHAR(50) will be too narrow for some e-mail addresses, for example |
 |
|
sapator
Constraint Violating Yak Guru
462 Posts |
Posted - 2010-01-30 : 17:57:26
|
Yes, I hear you. Thanks again |
 |
|
|