Sunday, February 26, 2012

First post to SQL Server

Hello!

I am programming against a SQL backend and have noticed something I can't explain, and it appears to be on the SQL side rather than in my program. I have two tables which are very large (100K+ records). When posting a change to each of them for the first time, the SQL server took 5-10 minutes to commit the change. After that change finally commits, I can add and edit without any problem or speed lag. It almost seems like the SQL server is doing something in the background, like building a list or gathering info it didn't have until the first time the table was posted to. Does anyone know why this type of situation would occur? I notice a bit of a slowdown the first time I post to other tables as well, but not as long a delay, because they are smaller tables. Thanks!

Richard|||Yes, you're right. SQL Server does something in the background: it builds an execution plan. In a few words, SQL Server analyses your query and determines the fastest way to complete it. The second time, it already has the execution plan cached, so it simply reuses that plan.
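You can watch this happen yourself. A rough sketch (assuming SQL Server 2000, since Query Analyzer and Books Online come up in this thread; YourTable is a made-up placeholder name): run a query against the table twice, then look it up in the plan cache and watch usecounts climb instead of a new plan being compiled.

    -- Inspect the plan cache (SQL Server 2000 syntax; later versions
    -- expose sys.dm_exec_cached_plans instead). YourTable is a placeholder.
    SELECT cacheobjtype, objtype, usecounts, sql
    FROM master.dbo.syscacheobjects
    WHERE sql LIKE '%YourTable%'
    ORDER BY usecounts DESC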

You also have to pay attention to the fact that SQL Server writes almost everything it does to its transaction log (except for a few commands like bulk insert or bulk copy; see Books Online, it might be interesting for you). So if you insert a large amount of data on a row-by-row basis (instead of as a batch), SQL Server will write a log record for every row it inserts, so there's another possible bottleneck in your application.
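A rough illustration of the difference (all table and column names here are made up for the example): the loop pays per-statement and, in autocommit mode, per-commit overhead on every row, while the set-based version does the same work in one statement.

    -- Row-by-row: each INSERT is its own statement and, in autocommit
    -- mode, its own transaction, so the log is flushed once per row.
    DECLARE @id INT
    SET @id = 1
    WHILE @id <= 100000
    BEGIN
        INSERT INTO TargetTable (id, payload)
        SELECT id, payload FROM StagingTable WHERE id = @id
        SET @id = @id + 1
    END

    -- Set-based: one statement, one commit, far less overhead.
    INSERT INTO TargetTable (id, payload)
    SELECT id, payload FROM StagingTable

    -- Minimally logged path for flat files (see Books Online):
    -- BULK INSERT TargetTable FROM 'c:\data\rows.txt' WITH (TABLOCK)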

ionut|||That explains it, thanks for your help!


|||5-10 minutes still seems like an awfully long time to create an execution plan and log a transaction, unless he's updating all of the records or something.

Are you doing an insert or an update that first time? How many records are being affected? Does it take as long if you execute the same query in Query Analyzer instead of through your application?|||If we are talking about inserting (or updating, for that matter) 100K+ rows, and if this is done on a row-by-row basis, I don't think 5-10 minutes is a long time. Of course, in this discussion the server's hardware plays an important role too.
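If you want to see exactly where the time goes, Query Analyzer can break it down. A rough sketch (the table and column names are made up):

    -- STATISTICS TIME separates parse/compile time from execution time,
    -- which is exactly the split being discussed here; STATISTICS IO
    -- shows how many pages had to be read from disk versus cache.
    SET STATISTICS TIME ON
    SET STATISTICS IO ON

    UPDATE BigTable SET SomeColumn = 'new value' WHERE KeyColumn = 42

    SET STATISTICS TIME OFF
    SET STATISTICS IO OFF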

ionut|||I believe that is how many rows are in the table rather than how many rows are being inserted/updated. Perhaps I'm reading it wrong, though.

It's true, though, that if it's not a bulk insert and 100,000 rows are being inserted, then depending on hardware, 5-10 minutes is reasonable. I read it, however, as inserting a single row into a 100K+ row table.

Perhaps Thread77 can clarify if there is still an issue.|||That is how many rows are in the table; there was only one row being inserted/edited at the time of the slowdown. Another factor, though, was the server we were testing on. It had a small amount of memory, so when the larger tables were first posted to, memory usage spiked, which also contributed to the slowdown. We ran the same test on another server, and although there was a slowdown the first time, it wasn't nearly as bad.
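In case anyone else hits this: we could reproduce the cold-cache effect on the test server by clearing the caches first, so the next run pays the first-touch cost again. Do this only on a test box, never production; a sketch:

    CHECKPOINT              -- flush dirty pages so dropping clean buffers is safe
    DBCC DROPCLEANBUFFERS   -- empty the buffer cache (data pages re-read from disk)
    DBCC FREEPROCCACHE      -- empty the plan cache (queries recompile)

    -- The first statement after this pays for disk reads and a fresh
    -- compile; the second execution should be fast again.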

Thanks for all your replies!

