lostpacket
Starting Member
7 Posts |
Posted - 2011-04-16 : 21:56:27
|
We have 3 databases kept on the D drive. There is a full backup every night to a hard disk on the network. The customer would like to keep data for the last 1 year. Now in these databases, there are tables which have timestamps and values (of plant data), but some tables are lookup tables (e.g. product, product id, description). For the tables with timestamps, we would like to keep only the last 1 year's data and delete the rest of the rows in that table on the D drive. But at the same time, the data should be fully backed up. It's just that we don't want too much space to be taken on the D drive. I thought of using file groups so large tables could be kept on a different hard drive, but the customer didn't prefer it. What can be done here? |
|
visakh16
Very Important crosS Applying yaK Herder
52326 Posts |
Posted - 2011-04-17 : 02:59:40
|
What you can do is write a set of procs which do this purging job for you based on the timestamp. Make sure you take a backup before these jobs run. You can schedule them in SQL Agent with a dependency set on the backup jobs, so that purging happens only after the backups finish.

------------------------------------------------------------------------------------------------------
SQL Server MVP
http://visakhm.blogspot.com/ |
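A minimal sketch of such a purge procedure (the table name dbo.PlantData and column ReadingTime are assumptions for illustration, not from the original post):

```sql
-- Purge rows older than one year; dbo.PlantData and
-- ReadingTime are hypothetical names.
CREATE PROCEDURE dbo.usp_PurgeOldPlantData
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM dbo.PlantData
    WHERE ReadingTime < DATEADD(YEAR, -1, GETDATE());
END
```

This would run as a SQL Agent job step placed after the backup step, so the purge only ever runs against a database that has just been fully backed up.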
 |
|
Jahanzaib
Posting Yak Master
115 Posts |
Posted - 2011-04-17 : 08:34:39
|
First of all:
- Delete the data (what is the row count per day?)
- Rebuild indexes once a week and shrink to reclaim the space, otherwise space and performance issues will occur
- If this system runs 24/7 then you have to delete the data 1,000 / 10,000 / 100,000 rows in one go; the figure depends on the row count of the table

Example
=======
If the total count in the table is 100,000, then divide by 2, i.e. 50,000 rows in the first go, and so on:

DECLARE @TOP int
SET @TOP = 50000
DELETE TOP(@TOP) FROM tab1 WHERE timestamp < GETDATE() - 365

This statement will delete 50,000 rows in one go; the next 50,000 will be deleted in a second one. You don't need the data older than a year, so there is no need to wrap this in an explicit BEGIN TRANSACTION.

Regards,
Syed Jahanzaib Bin Hassan
MCTS,MCITP,OCA,OCP,OCE,SCJP,IBMCDBA
My Blog
www.aureus-salah.com |
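The batching described above can be put in a loop that repeats until no rows older than a year remain (table and column names here are hypothetical, as in the post's example):

```sql
-- Repeatedly delete in 50,000-row batches until there is
-- nothing left to purge; tab1 / timestamp as in the example.
DECLARE @BatchSize int;
SET @BatchSize = 50000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) FROM tab1
    WHERE timestamp < GETDATE() - 365;

    IF @@ROWCOUNT = 0 BREAK;   -- no more old rows
END
```

Small batches keep each implicit transaction short, which limits transaction log growth and lock durations on a 24/7 system.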
 |
|
lostpacket
Starting Member
7 Posts |
Posted - 2011-04-17 : 10:21:46
|
Ok. If the data is deleted from the table, how do I perform a backup? For e.g., I deleted all the rows older than 1 year. I have this table on the D drive. How am I going to have a full, undeleted copy of the table on the network drive at the same time? |
 |
|
sunitabeck
Master Smack Fu Yak Hacker
5155 Posts |
Posted - 2011-04-17 : 11:11:20
|
I guess a lot of how you design this would depend on the number of rows involved, how frequently you need to archive, how frequently you need to access the data, etc. One approach to doing this would be as follows:

1. Create an archive table with the same schema as the production table, but put it in another file group on another drive.
2. At regular intervals - once a day? once an hour? or whatever the business requires - delete from the production table and insert into the archive table.
3. When you want to access the data, query the archive table or the production table or both, as required by the date range that you need to query for.

Be sure to do the deletion from the production table and the insertion into the archive table in a transaction. Since you are on SQL 2008 (or even 2005), you could use a single statement with the OUTPUT clause as shown in the example below.

-- Create test data.
CREATE TABLE Prod(id INT, date DATETIME, DATA VARCHAR(1));
CREATE TABLE Archive(id INT, date DATETIME, DATA VARCHAR(1)); -- on a different file group.
INSERT INTO Prod VALUES (1,'20100416','a'),(2,'20100417','b'),(3,'20100418','c');

-- Move everything older than a year into the archive table.
DELETE FROM Prod
OUTPUT DELETED.* INTO Archive
WHERE date <= '20100417';

-- Check the results.
SELECT * FROM Prod;
SELECT * FROM Archive;

-- Clean up.
DROP TABLE Prod;
DROP TABLE Archive;

Edit: I just noticed your client's requirement that they don't want to use a different file group. So all my typing was in vain!! But I still think it is a viable approach - but the customer is always right, so I will say nothing more. |
 |
|
lostpacket
Starting Member
7 Posts |
Posted - 2011-04-19 : 22:09:51
|
Another option the customer gave was that he could make one more copy of the SQL Server available on the same network. Let me call:

Original server: S1
Second server: S2

S1 and S2 will be on the same network. Initially, S1 can be fully restored onto S2. During the daytime a few more rows will be added to the tables on S1. At night, perform the delete operation on S1 and then append that table to the table on S2, so S2 will get the updates that happened during the day. After this append is finished, perform a full backup of S2 to the network drive.

How does this seem to you guys? I think appending could help. Can we append two tables on different SQL servers?

Thanks a lot. |
 |
|
lostpacket
Starting Member
7 Posts |
Posted - 2011-04-19 : 22:11:43
|
Forgot to mention this, otherwise it'll confuse you. The customer doesn't want the users to see data older than a year. So even if we use file groups, users will still be able to see the data older than a year. Hence the problem. |
 |
|
sunitabeck
Master Smack Fu Yak Hacker
5155 Posts |
Posted - 2011-04-20 : 07:36:50
|
You could certainly use two servers and run queries that use tables from both servers, for example, by linking the servers. There is documentation and examples here: http://msdn.microsoft.com/en-us/library/aa259589(v=sql.80).aspx

But, if the only goals are (a) the client should not have access to the archive data and (b) the archive data should be on a different drive, then you can accomplish both goals even using one server. Use separate databases for archive and production data and remove permissions for the archive database. You could even put both tables in the same database and deny permissions on the archive table.

For any of the three approaches - separate server, separate database or separate table - the query that I had posted on 04/17/2011 : 11:11:20 should work. You will need to use the 4-part or 3-part naming convention to access tables across servers or across databases, respectively. |
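To illustrate the naming conventions and the permission change (the database names ProdDB/ArchiveDB, linked server name S2, and user name SomeUser are hypothetical):

```sql
-- 3-part name: archive table in a separate database on the same server.
DELETE FROM ProdDB.dbo.Prod
OUTPUT DELETED.* INTO ArchiveDB.dbo.Archive
WHERE date < DATEADD(YEAR, -1, GETDATE());

-- 4-part name: archive table on a linked server named S2.
-- OUTPUT ... INTO requires a local target, so the cross-server
-- case uses an INSERT followed by a DELETE (ideally in one transaction).
INSERT INTO S2.ArchiveDB.dbo.Archive
SELECT * FROM dbo.Prod
WHERE date < DATEADD(YEAR, -1, GETDATE());

DELETE FROM dbo.Prod
WHERE date < DATEADD(YEAR, -1, GETDATE());

-- Hide the archive data from ordinary users.
DENY SELECT ON ArchiveDB.dbo.Archive TO SomeUser;
```

The single-statement DELETE ... OUTPUT INTO form works for the separate-database and separate-table cases; only the cross-server case needs the two-statement variant.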
 |
|
lostpacket
Starting Member
7 Posts |
Posted - 2011-04-20 : 09:59:14
|
Thank you, Sunita. I think this is what I was looking for. |
 |
|