Monday, February 20, 2012

Overhead with Tables

Hey, I need to know the overhead associated with SQL Server tables.
Knowing what that overhead is composed of would also be helpful.
please, may i inquire why you are asking? yes, SQL Server tables have some overhead, and if i told you it was only 7%, would you be happy?

Perhaps he means in comparison with Oracle.

Oracle incurs a significant overhead in the creation of tables, which is why the use of temporary tables in Oracle is discouraged. SQL Server incurs very little overhead in creating tables, and temporary tables become a valuable tool for efficient coding.

Trying to determine the size of the database tables, so I can figure out which queries would be the slowest and then create a physical schema.
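
Since temporary tables came up just above as a cheap tool in SQL Server (and the question of how to create one comes up again a couple of posts below), here is a minimal sketch; the #work table and its columns are invented purely for illustration.

    -- A local temporary table lives in tempdb and is dropped automatically
    -- when the session that created it ends; the # prefix is what makes it temporary.
    CREATE TABLE #work
    (
        id    int         NOT NULL PRIMARY KEY,
        label varchar(50) NULL
    );

    INSERT INTO #work (id, label) VALUES (1, 'example');

    SELECT id, label FROM #work;   -- queried like any permanent table; no join is needed just to create it

    DROP TABLE #work;              -- optional; it disappears on its own at session end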

So if my table holds 1MB of records, its size is 1.07MB with overhead? What would the overhead be composed of? Indexes, or something else?

Temporary tables are tables in memory, right? How would I create one? Do I use a join?

Nate,

No offense, but what is the extent of your database experience? We'll do our best to help you out here, but you seem unsure of even what questions to ask. The ones you have posted so far have little relevance to the database design process.

Start with a logical model. What is the purpose of the database you are designing?

I've made a number of databases: Paradox, MS Access, Oracle, MySQL, InterBase. I probably ask stupid questions because I don't know the answers and I haven't had years of experience. I'm in the final year of a Bachelor of Computing Systems, though I previously trained in graphic design.

I'm not an expert, but it seems logical to me to understand how much data is being dealt with in a database when producing a physical schema. Even if it is only seven percent, that seems like relevant information, but you're welcome to convince me otherwise.

And if you mean the database design process, you would be meaning conceptual, logical, physical, right? Which would be the same process I'm following, except I ask questions because I've looked for the answers and haven't found them.

Plus, the question was about how SQL Server deals with tables more so than the design of my physical schema.

but where did the "overhead" question come from?

let me ask you this: if you were in the market to buy a computer, why would one of the first questions you ask be how much heat the motherboard puts out? surely the fan can take care of it, yes?

it's the same with table "overhead" -- what does it matter?

by the way, i pulled the number 7% out of the air

A motherboard doesn't take up HDD space?

If I have a table (or tables) at 1GB, or expected to reach 1GB, and there is an overhead of 5%, then I have actually used 1.05GB, and if the DB grows even larger I will need even more extra space. And if I omit any associated overhead from the analysis, I am likely to miscalculate how long it will take before I need to increase the DB size (hosted server) based on the growth of site records. (But does the overhead scale with the size of the table, or is it fixed?)
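
For what it's worth, SQL Server will report the actual split between data, index and unused space, so this does not have to be estimated by hand. A minimal sketch using the documented sp_spaceused procedure; the table name dbo.SiteRecords is only a placeholder.

    -- Whole-database size and unallocated space
    EXEC sp_spaceused;

    -- Rows, reserved, data, index_size and unused space for one table (figures in KB);
    -- dbo.SiteRecords is a placeholder name
    EXEC sp_spaceused @objname = N'dbo.SiteRecords';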

The question came from designing a physical schema for Oracle, where the calculations had to include overhead.

Nate1 - you don't need to consider this sort of "overhead" for SQL Server. If you really care, the overhead is predominantly page header information, some in-row information (for example the null state of columns, the position and length of variable-length columns, the position of fixed-length columns), the array correlating logical row order with physical order on the page, etc. There are also a few bytes in other system pages. As far as indexes go, it all depends on how you index.
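
To put rough numbers on that breakdown: the per-row byte counts below follow the commonly documented formula for estimating table size, but the table itself is invented, so treat this as an illustration rather than a rule. Assume two int columns (8 bytes of fixed data) and one varchar(50) averaging 20 bytes.

    4 bytes   row header (status bits plus the fixed-length size field)
    3 bytes   null bitmap (2-byte column count plus 1 byte covering up to 8 columns)
    4 bytes   variable-length block (2-byte count plus a 2-byte offset per varchar column)
    2 bytes   slot array entry at the end of the page

    row size       = 8 + 20 + 4 + 3 + 4 = 39 bytes
    rows per page  = 8096 / (39 + 2)    = about 197 (an 8192-byte page keeps 96 bytes for its own header)
    overhead share = 13 / 41            = roughly a third of the on-page cost of this narrow row

For a wide row the same handful of bytes is a much smaller fraction, which is why nobody can quote a single overhead percentage.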

I can assure you that none of the contributors here ever consider this when designing a SQL Server database (at either the physical or logical level).

I wouldn't go so far as to say he should not consider overhead, but it is certainly way down the list of priorities. The first step must be a logical model, then the physical model, and then the hardware requirements (where overhead may come into play).

The overhead per row will be fixed. However, there are a number of things that cannot be calculated. For example: Will your clustered index likely produce page splits? How many? Will there be updates to varchar fields that induce page splits? What will their distribution be? What will the maximum amount of fragmentation be before you reindex? How about the non-clustered indexes; will the insert or update pattern induce page splits there? Are you planning on implementing table/index partitioning? Will there be user-defined statistics associated with the table? How about text/image data? What is the maximum number of rows per page you will be able to fit in? Will data be deleted or purged (thus leaving empty gaps in the table)?
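
Most of those cannot be predicted up front, but they can all be measured after the fact. A minimal sketch using the documented sys.dm_db_index_physical_stats function; dbo.SiteRecords is again a placeholder name.

    -- Fragmentation, page fullness and average record size for every index on one table
    SELECT  index_id,
            index_level,
            avg_fragmentation_in_percent,
            avg_page_space_used_in_percent,
            avg_record_size_in_bytes,
            page_count
    FROM    sys.dm_db_index_physical_stats(
                DB_ID(), OBJECT_ID(N'dbo.SiteRecords'),
                NULL, NULL, 'SAMPLED');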

Short answer: multiply the total number of bytes in the row by the number of rows, then multiply by three... unless you want to answer all of the above questions.

I wonder how much overhead the engine in my car takes up... oh wait, if I don't have an engine... I guess that would cause other problems.

Exactly the point I've been trying to make. Whatever the overhead will be, you have to deal with it. The only other option is not to create your application.

"Exactly the point I've been trying to make. Whatever the overhead will be, you have to deal with it. The only other option is not to create your application."

So... why didn't you just say so?

"Exactly the point I've been trying to make. Whatever the overhead will be, you have to deal with it."

exactly the point i was trying to make (motherboard heat? fan?) -- the dbms will take care of the overhead, you yourself won't have to deal with it

Exactly the point I was trying to ma... oh wait a minute - no, I was making another point.

"exactly the point i was trying to make (motherboard heat? fan?) -- the dbms will take care of the overhead, you yourself won't have to deal with it"

I still like mine better

Maybe he could just use 1 table
