

AskSQLTeam
Ask SQLTeam Question

0 Posts

Posted - 2002-01-18 : 10:47:25
Chad writes "I am designing an accounting database and am having trouble designing my Journal Entry table due to size and performance issues.

CASE #1
On one hand, I can build a simple table with columns SITE (store number), DATE, GL_ACCOUNT (general ledger account), and AMOUNT. SITE, DATE, and GL_ACCOUNT form the primary key, and AMOUNT is simply a double. GL_ACCOUNT would be a foreign key to a GL_ACCOUNT table, and SITE a foreign key to a SITE table.
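A minimal sketch of this Case #1 layout might look like the following. The names come from the description above; the DESCRIPTION and STORE_NAME columns are my invention, DATE is renamed ENTRY_DATE to avoid the reserved word, and I've substituted DECIMAL for the double, since floating-point rounding is usually unacceptable for ledger amounts:

```sql
-- Hypothetical DDL for the narrow Journal Entry design (Case #1).
CREATE TABLE GL_ACCOUNT (
    GL_ACCOUNT  INT          NOT NULL PRIMARY KEY,
    DESCRIPTION VARCHAR(100) NOT NULL              -- assumed column
)

CREATE TABLE SITE (
    SITE       INT         NOT NULL PRIMARY KEY,
    STORE_NAME VARCHAR(50) NOT NULL                -- assumed column
)

CREATE TABLE JOURNAL_ENTRY (
    SITE       INT           NOT NULL REFERENCES SITE (SITE),
    ENTRY_DATE DATETIME      NOT NULL,
    GL_ACCOUNT INT           NOT NULL REFERENCES GL_ACCOUNT (GL_ACCOUNT),
    AMOUNT     DECIMAL(19,4) NOT NULL,  -- DECIMAL, not FLOAT: exact money math
    PRIMARY KEY (SITE, ENTRY_DATE, GL_ACCOUNT)
)
```

The composite primary key doubles as the clustered index, which suits queries that filter by site and date range.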

Every day there may be 10-50 GL_ACCOUNT entries in the Journal Entry table. At 50 entries a day, that's 54,750 records every 3 years. If there are 50 stores each making the same number of journal entries, that's 2,737,500 records in 3 years!

One benefit is that the GL_ACCOUNT, SITE, and JOURNAL_ENTRY tables would be normalized. Adding another GL account is simply adding another record to the GL_ACCOUNT table. But the biggest problem is performance and database size: I will need to query this Journal Entry table every day to give real-time reporting to our management.
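The daily reporting queries against the narrow design are typically grouped aggregates. A hedged example (store number and dates are illustrative, column names follow the question):

```sql
-- Hypothetical daily-totals report for one store over one month (Case #1).
SELECT GL_ACCOUNT,
       SUM(AMOUNT) AS TOTAL_AMOUNT
FROM   JOURNAL_ENTRY
WHERE  SITE   = 42                  -- example store number
  AND  [DATE] >= '2002-01-01'
  AND  [DATE] <  '2002-02-01'
GROUP BY GL_ACCOUNT
ORDER BY GL_ACCOUNT
```

With SITE and DATE leading the primary key, this resolves to a range scan rather than a full-table scan, which is what keeps the design workable as rows accumulate.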

CASE #2
I can build one Journal_Entry table where SITE and DATE are the primary key and there are 10-50 columns, each representing a different GL account.

We now have 1 record per store per day: 54,750 records for 50 stores over 3 years. Querying this information would be much quicker, but the data is not normalized, there may be NULL values when a GL account has no entry on a given day, and if a new GL account is added, it has to be added to the table design.
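For contrast, a sketch of the Case #2 wide layout; the individual account column names here are invented for illustration:

```sql
-- Hypothetical DDL for the wide design (Case #2): one column per GL account.
CREATE TABLE JOURNAL_ENTRY_WIDE (
    SITE          INT           NOT NULL,
    ENTRY_DATE    DATETIME      NOT NULL,
    GL_4000_SALES DECIMAL(19,4) NULL,   -- NULL when no entry that day
    GL_5000_COGS  DECIMAL(19,4) NULL,
    GL_6100_RENT  DECIMAL(19,4) NULL,
    -- ...one column per additional GL account; adding an account
    -- means an ALTER TABLE plus changes to every query that reads it
    PRIMARY KEY (SITE, ENTRY_DATE)
)
```

This is exactly the maintenance trade-off described above: fewer rows, but the schema itself has to change whenever the chart of accounts does.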

Which method would you recommend, and are there ways to maximize performance?

Thank you,

Chad

SQL Server 2000"

MichaelP
Jedi Yak

2489 Posts

Posted - 2002-01-18 : 11:17:46
I'd go for Case #1. In my experience, the "wide" table will cause you maintainability problems down the road. Like you said, it's not normalized, there will be many NULLs, etc.

2.7 million records in a table sounds like a lot, but it's not too bad. I know of certain applications that enter 1.4 million records a WEEK into a table. Maybe you will need to come up with a way to offload each year's data to another server, keeping your production database small and fast, but keeping all of your data for reporting etc.
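One common way to do that yearly offload (a sketch only; the archive database name and cutoff date are made up, and column names follow the question) is an INSERT...SELECT into an archive table followed by a delete, wrapped in a transaction:

```sql
-- Hypothetical yearly archive job: copy old rows to an archive
-- database, then remove them from the production table.
BEGIN TRANSACTION

INSERT INTO ArchiveDB.dbo.JOURNAL_ENTRY (SITE, [DATE], GL_ACCOUNT, AMOUNT)
SELECT SITE, [DATE], GL_ACCOUNT, AMOUNT
FROM   JOURNAL_ENTRY
WHERE  [DATE] < '2000-01-01'

DELETE FROM JOURNAL_ENTRY
WHERE  [DATE] < '2000-01-01'

COMMIT TRANSACTION
```

The transaction keeps the copy and the delete atomic, so a failure partway through doesn't lose or duplicate rows.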

Maybe you can build a test DB filled with data and run test reports against it to see how long they take. Benchmark several different data-load sizes (6 months of data, 1 year, 2 years, etc.) and come up with numbers that you can present to your superiors. When you do that, it's much easier to get additional hardware etc. approved as needed: they have a timeline that says "We need X hardware to report on Y months of data in a reasonable amount of time."
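Generating that test data can be as simple as nested WHILE loops. A rough sketch, assuming the narrow Case #1 table and that sites 1-50 and accounts 1-50 already exist in their lookup tables (all volumes here are illustrative):

```sql
-- Hypothetical load script: one month of entries,
-- 50 stores x 50 accounts per store per day.
DECLARE @d DATETIME, @site INT, @acct INT
SET @d = '2002-01-01'
WHILE @d < '2002-02-01'
BEGIN
    SET @site = 1
    WHILE @site <= 50
    BEGIN
        SET @acct = 1
        WHILE @acct <= 50
        BEGIN
            INSERT INTO JOURNAL_ENTRY (SITE, [DATE], GL_ACCOUNT, AMOUNT)
            VALUES (@site, @d, @acct, RAND() * 1000)  -- random dollar amount
            SET @acct = @acct + 1
        END
        SET @site = @site + 1
    END
    SET @d = DATEADD(dd, 1, @d)
END
```

Run the same reports against 6-month, 1-year, and 2-year loads and record the timings; those are the numbers to take to management.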

Michael
