Saturday, February 25, 2012
Reducing Spam Associated with Posting to Newsgroups
To help avoid receiving unwanted messages (spam) in your regular e-mail account, you may not want to include your regular e-mail address when you post a question or reply to a post in a discussion group. Instead you may want to do one of the following:
* Use a modified e-mail address: Use a different version of your e-mail address that others will understand, but that spam tools can't automatically pick up. For example, if your actual e-mail address is "emailname@account.com", consider using a modified address such as "emailname@account.com.invalid", "emailname@REMOVE_THIS.account.com.invalid", or "emailname@account.com.NO_SPAM". That way, spam tools will send mail to an invalid e-mail address, and people will know to remove the extra part (for example, the "REMOVE_THIS") when they send you e-mail. When you post a question or reply to a discussion group, just enter your modified e-mail address in the appropriate box.
* Use a secondary e-mail account: Set up or use an e-mail account, such as a Hotmail account, that is separate from your primary account for posting to discussion groups. When you post a question or reply to a discussion group, use your Hotmail account as your e-mail address.
If you have feedback or questions about this, please post a reply in the newsgroup, or contact us at http://register.microsoft.com/contactus30/contactus.asp?domain=communities
Thanks!
Microsoft Communities Team
|||My mailbox has been flooded with spam now.
Rather than changing e-mail addresses, I think
the right solution is to develop spam filters
that will stop it.
I think Yahoo has a much better filter than MS.
|||But where should those spam filters exist? If your inbox is being spammed,
then it's because your email address was harvested. If you post your email
address in a publicly accessible forum, such as an NNTP Server like
msnews.microsoft.com, it's going to get harvested and the spam filter needs
to live on your inbox. If the problem is actually that you're seeing too
much spam in these newsgroups, well, we're working on that.
Mind you, I understand your frustration and I'm getting a ton of it too,
but this isn't specific to msnews.microsoft.com or the Microsoft-hosted
newsgroups and the obvious workaround is simply to not give the spammers
any ammunition. :-)
--
Sincerely,
Stephen Dybing
This posting is provided "AS IS" with no warranties, and confers no rights.
Please reply to the newsgroups only, thanks.
Reducing Size of MSDB
Are there any scripts that can be used to archive/remove older SQL agent job
runs in MSDB? We are running into larger and larger MSDB sizes and want to
control its growth.
--
Larry Menzin
American Techsystems Corp.
SQL Server Agent does this for you, if you are referring to job history. It is configurable in Enterprise Manager,
on one of the tabs under the Agent's properties. What is possibly causing the growth, though, isn't job history
but backup history. Check out sp_delete_backuphistory.
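As a sketch of how that cleanup might look (the 30-day cutoff is an arbitrary example; sp_delete_backuphistory takes a cutoff date and removes all backup/restore history rows older than it):

```sql
-- Prune backup and restore history older than 30 days from msdb.
USE msdb;
GO
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(day, -30, GETDATE());
EXEC sp_delete_backuphistory @oldest_date = @cutoff;
GO
```

Running this on a schedule (e.g. as an Agent job itself) keeps the history tables from growing unbounded.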
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Larry Menzin" <LarryMenzin@.discussions.microsoft.com> wrote in message
news:A840316B-9B06-4884-9CE6-25715FDAAD69@.microsoft.com...
> Are there any scripts that can be used to archive/remove older SQL agent job
> runs in MSDB? We are running into larger and larger MSDB sizes and want to
> control its growth.
> --
> Larry Menzin
> American Techsystems Corp.
|||Just to add to Tibor's answer: If you have a lot of DTS packages, and if
each DTS package has several versions saved, then your MSDB will be huge.
You can right click on a DTS package in Enterprise Manager and select
'Versions' from the popup menu to see how many versions that package has
got, and delete the older versions if you don't need them anymore.
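Instead of clicking through Enterprise Manager, one rough way to see which packages have piled up versions is to query msdb directly (a sketch for SQL Server 2000, where sysdtspackages holds one row per saved package version):

```sql
-- Count saved versions per DTS package in msdb (SQL Server 2000).
SELECT name, COUNT(*) AS version_count
FROM msdb.dbo.sysdtspackages
GROUP BY name
ORDER BY version_count DESC;
```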
HTH,
Vyas, MVP (SQL Server)
SQL Server Articles and Code Samples @. http://vyaskn.tripod.com/
"Larry Menzin" <LarryMenzin@.discussions.microsoft.com> wrote in message
news:A840316B-9B06-4884-9CE6-25715FDAAD69@.microsoft.com...
Are there any scripts that can be used to archive/remove older SQL agent job
runs in MSDB? We are running into larger and larger MSDB sizes and want to
control its growth.
Larry Menzin
American Techsystems Corp.
|||The only safe way I know of is to manually delete them. You can however set
jobs to delete after they complete. When you go to the notifications tab of
a scheduled job, you can schedule Automatically Delete Job to have it be
deleted when it completes.
If you are scripting the creation of a job, set the delete_level parameter
of sp_addjob to 1.
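A minimal sketch of the scripted approach (the job name here is hypothetical; delete_level = 1 deletes the job on success, 2 on failure, 3 always):

```sql
-- Create a job that removes itself from msdb once it completes successfully.
USE msdb;
GO
EXEC sp_addjob
    @job_name = N'OneOffCleanup',  -- hypothetical job name
    @delete_level = 1;             -- 1 = delete the job when it succeeds
GO
```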
"Larry Menzin" <LarryMenzin@.discussions.microsoft.com> wrote in message
news:A840316B-9B06-4884-9CE6-25715FDAAD69@.microsoft.com...
> Are there any scripts that can be used to archive/remove older SQL agent
> job
> runs in MSDB? We are running into larger and larger MSDB sizes and want to
> control its growth.
> --
> Larry Menzin
> American Techsystems Corp.
reducing size of log file
I have SQL 2000 running on W2000.
Some of my databases have huge log files - e.g. a 5GB log for a 50MB database.
Under properties, it reports that it is 95% space.
How can I reduce the size of these?
I have tried the shrink option but it doesn't help.
Many thanks
NEIL
You need to back up the log before shrinking it:
BACKUP LOG database_name WITH TRUNCATE_ONLY
On Jan 25, 1:05 pm, "Neil Jarman" <n...@.tNOiSPAMvPLEASEy.co.uk> wrote:
> Hi,
> I have SQL 2000 running on W2000.
> Some of my data has huge log files - eg 5Gb log for a DB of 50Mb.
> Under properties, it reports that it is 95% space.
> How can I reduce the size of these?
> I have tried the shrink option bu it doesn't help.
> Many thanks
> NEIL
|||Neil
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
"Neil Jarman" <neil@.tNOiSPAMvPLEASEy.co.uk> wrote in message
news:epa9v6$oo8$1$8300dec7@.news.demon.co.uk...
> Hi,
> I have SQL 2000 running on W2000.
> Some of my data has huge log files - eg 5Gb log for a DB of 50Mb.
> Under properties, it reports that it is 95% space.
> How can I reduce the size of these?
> I have tried the shrink option bu it doesn't help.
> Many thanks
> NEIL
>
|||Thanks for both your replies.
I have backed up the log - using EM Backup Database... and this has reduced the
actual used size even more, however it still occupies 5GB of disk space.
I can't seem to use the DBCC commands described in the article supplied; can
you tell me how to do this? If I type DBCC followed by anything, I get "dbcc
is not recognised"...
cheers,
NEIL
|||Hello,
BACKUP LOG will clear up the inactive portion of the log but will not
reduce the physical file. To reduce the physical file you may need to
shrink the LDF file using the DBCC SHRINKFILE command. Take a look in
Books Online at DBCC SHRINKFILE.
One more thing: schedule frequent transaction log backups. This will
ensure that your LDF is not growing heavily, and it also makes sure you
have backup files to recover the database when required.
Thanks
Hari
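A sketch of the sequence described above (database name, logical log file name, and target size are placeholders; on SQL 2000 the logical file name can be found with sp_helpfile, and TRUNCATE_ONLY breaks the log backup chain, so take a full backup afterwards):

```sql
-- 1. Truncate the inactive portion of the log (SQL 2000 syntax).
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;
GO
-- 2. Shrink the physical LDF file down to a target size (here 100 MB).
USE MyDatabase;
GO
DBCC SHRINKFILE (MyDatabase_Log, 100);  -- logical file name from sp_helpfile
GO
```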
"Neil Jarman" <neil@.tNOiSPAMvPLEASEy.co.uk> wrote in message
news:epaec9$p5e$1$830fa7b3@.news.demon.co.uk...
> Thanks for both your replies.
> I have bvacked up log - using EM Backup Database... and this has reduced
> the actual used size even miore, however it still occupies 5Gb of disk
> space.
> I can't seem to use the DBCC commands described in the article supplied,
> can you tell me how to do this? If I type DBCC followed by anything, I get
> dbcc is not recognised ...
> cheers,
> NEIL
>
|||Hi Tibor,
I used a CMD screen to do this - I presumed it was a DOS type command.
NEIL
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:eWgPBBKQHHA.2140@.TK2MSFTNGP03.phx.gbl...
> What application are you using to execute the DBCC command?
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Neil Jarman" <neil@.tNOiSPAMvPLEASEy.co.uk> wrote in message
> news:epaec9$p5e$1$830fa7b3@.news.demon.co.uk...
>
|||Hi Tibor,
Many thanks for the help - my log file is massively reduced now.
regards,
NEIL
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%23VKhZhKQHHA.3944@.TK2MSFTNGP06.phx.gbl...
> Nope, it is a TSQL command. Use Query Analyzer/SQL Server Management
Studio.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Neil Jarman" <neil@.tNOiSPAMvPLEASEy.co.uk> wrote in message
> news:epaph8$f6s$1$8300dec7@.news.demon.co.uk...
>
Reducing physical transaction log file size.
I need some help to reduce the physical file size of a database transaction
log. I ran the shrink database option under the System Manager and was able
to shrink the logical size of the log file; however, I would also like to
reduce the physical size of the file to free up disk space. I have tried to
run the DBCC SHRINKFILE and DBCC SHRINKDATABASE commands but that does not
appear to be successful. Can anybody help? I am running SQL 2000. Thanks.
|||http://www.karaszi.com/SQLServer/info_dont_shrink.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Elbryyan" <Elbryyan@.discussions.microsoft.com> wrote in message
news:67FAF140-014E-4CB2-800E-740AF1B69212@.microsoft.com...
> I need some help to reduce the physical file size of a database transaction
> log. I ran the shrink database option under the System Manager and was able
> to shrink the logical size of the log file, however, I would also like to
> reduce the physical size of the file to free up disk space. I have tried to
> run the DBCC Shrinkfile and DBCC shrinkdatabase commands but that does not
> appear to be successful. Can anybody help? I am running SQL 2000. Thanks.
Reducing many-to-many to one-to-one
I have the following three tables below containing Resources,
Categories and a link table so each Resource can belong to one or more
Categories. I would like to create a view (ResourceID, ResourceName,
CategoryID, CategoryName) that includes one row for each Resource with
just one of the Categories that it belongs to.
Resource table
- ResourceID
- ResourceName
- etc..
Category table
- CategoryID
- CategoryName
- etc..
ResourceCategory table
- ResourceID
- CategoryID
Can anyone help? Thanks.
|||On Fri, 14 Sep 2007 14:43:33 -0000, nick@.nova5.net wrote:
Quote:
Originally Posted by
>Hi,
>
>I have the following three tables below containing Resources,
>Categories and a link table so each Resource can belong to one or more
>Categories. I would like to create a view (ResourceID, ResourceName,
>CategoryID, CategoryName) that includes one row for each Resource with
>just one of the Categories that it belongs to.
Hi Nick,
That's only possible if you somehow specify WHICH of the categories you
want. Lowest CategoryID? Longest CategoryName? CategoryID closest to a
multiple of 42?
I'm sure that you don't care, but since SQL Server has no "just give me
any I don't care which" operator, you'll have to specify something.
Assuming you want the lowest CategoryID, you can use the following on
SQL Server 7.0 and up (on SQL 2005, you might find a better solution
with CTEs and CROSS APPLY, but since you failed to specify the version,
I'll play it safe):
CREATE VIEW YourShinyNewView
AS
SELECT r.ResourceID, r.ResourceName,
c.CategoryID, c.CategoryName
FROM Resource AS r
INNER JOIN (SELECT ResourceID, MIN(CategoryID) AS MinCategoryID
FROM ResourceCategory
GROUP BY ResourceID) AS rc
ON rc.ResourceID = r.ResourceID
INNER JOIN Category AS c
ON c.CategoryID = rc.MinCategoryID;
(Untested - see www.aspfaq.com/5006 if you prefer a tested reply)
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis|||Hi Hugo,
Thanks for help, just what I needed. I was trying to put a 'top 1' in
the nested select which obviously (now!) didn't work..
Nick.
Quote:
Originally Posted by
Hi Nick,
>
That's only possible if you somehow specify WHICH of the categories you
want. Lowest CategoryID? Longest CategoryName? CategoryID closest to a
multiple of 42?
>
I'm sure that you don't care, but since SQL Server has no "just give me
any I don't care which" operator, you'll have to specify something.
>
Assuming you want the lowest CategoryID, you can use the following on
SQL Server 7.0 and up (on SQL 2005, you might find a better solution
with CTEs and CROSS APPLY, but since you failed to specify the version,
I'll play it safe):
>
CREATE VIEW YourShinyNewView
AS
SELECT r.ResourceID, r.ResourceName,
c.CategoryID, c.CategoryName
FROM Resource AS r
INNER JOIN (SELECT ResourceID, MIN(CategoryID) AS MinCategoryID
FROM ResourceCategory
GROUP BY ResourceID) AS rc
ON rc.ResourceID = r.ResourceID
INNER JOIN Category AS c
ON c.CategoryID = rc.MinCategoryID;
>
(Untested - see www.aspfaq.com/5006 if you prefer a tested reply)
>
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
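As an editorial aside: for readers on SQL Server 2005, the CROSS APPLY approach Hugo alludes to might look like the following. This is an untested sketch using the same hypothetical Resource/Category/ResourceCategory tables as the thread; the view name is invented, and it picks the lowest CategoryID per resource just like the derived-table version above.

```sql
-- Sketch only: requires SQL Server 2005 or later (CROSS APPLY).
-- CROSS APPLY drops resources that have no category, matching the
-- behavior of the INNER JOIN in the derived-table version.
CREATE VIEW YourShinyNewView2005
AS
SELECT r.ResourceID, r.ResourceName,
       c.CategoryID, c.CategoryName
FROM Resource AS r
CROSS APPLY (SELECT TOP (1) rc.CategoryID
             FROM ResourceCategory AS rc
             WHERE rc.ResourceID = r.ResourceID
             ORDER BY rc.CategoryID) AS x
INNER JOIN Category AS c
    ON c.CategoryID = x.CategoryID;
```

The TOP (1) ... ORDER BY inside the APPLY is where you would change the "which category wins" rule that Hugo asks about.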
Reducing log (ldf) file size
I thought that the size of the log file went down when you did a full
database backup. How can I reduce the size of my log file? Do I need to
back it up separately? Thanks.
David
DBCC SHRINKFILE(databasename_log, 0)
This will reduce the log file size to 504 KB.
However, if you don't change your current recovery model and/or the behavior
of your apps, the log file is just going to grow again between backups. So
the above is only a temporary relief.
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
"David C" <dlchase@.lifetimeinc.com> wrote in message
news:uC#FtmiLFHA.1392@.TK2MSFTNGP10.phx.gbl...
> I thought that the size of the log file went down when you did a full
> database backup. How can I reduce the size of my log file? Do I need to
> back it up separately? Thanks.
> David
>
|||You are right. I thought I had set it to "Simple" but was set to
"Full". Thanks Aaron.
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
|||and additionally to Aaron's reply, you need to back up the log separately
(...look up BACKUP LOG xxxx... in Books Online...) to keep the log in shape.
Setting the recovery to Simple will limit your possibilities to recover data
in case something happens to the database.
Regards
Steen
David wrote:
> You are right. I thought I had set it to "Simple" but was set to
> "Full". Thanks Aaron.
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
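Pulling the advice in this thread together: back up the log first, then shrink the physical file. A hedged sketch only; the database name, logical log file name, and backup path below are placeholders, not names from the thread.

```sql
-- Placeholders: replace YourDb, YourDb_Log, and the backup path.
-- Back up the log so the inactive portion can be reused/freed.
BACKUP LOG YourDb TO DISK = 'D:\Backups\YourDb_log.bak'

USE YourDb
-- Then shrink the physical log file (target size in MB).
DBCC SHRINKFILE (YourDb_Log, 1)
```

As Aaron notes in the thread, without regular log backups (or the Simple recovery model) the file will simply grow back.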
Reducing like returned data
I'm drawing a blank (too much celebrating last night?) on the SQL term that is used to
eliminate multiples of like data when it is returned from the
database.
ie, instead of ...
red
blue
red
green
it would return ...
red
blue
green
Sorry for the trouble and thanks.
DISTINCT or GROUP BY will do that for you.
Example:
SELECT DISTINCT colour
FROM ColourTable
--
David Portas
----
Please reply only to the newsgroup
--|||Thanks, David!
On Thu, 1 Jan 2004 23:47:08 -0000, "David Portas"
<REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote:
>DISTINCT or GROUP BY will do that for you.
>Example:
>SELECT DISTINCT colour
> FROM ColourTable
>--
>David Portas
>----
>Please reply only to the newsgroup
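For completeness, the GROUP BY form David mentions returns the same distinct list as his DISTINCT example (same hypothetical ColourTable):

```sql
-- Equivalent to: SELECT DISTINCT colour FROM ColourTable
SELECT colour
FROM ColourTable
GROUP BY colour
```

GROUP BY becomes the better choice once you also want aggregates, e.g. COUNT(*) per colour.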
Reducing LDF files
If I understand it correctly, you only need an LDF file to restore to a point in time after the last full backup? If this is so, could the LDF file not be reduced in size on performing a full backup?
Most of the time it's not an issue, as there is enough space on the HDD, but is it possible to reduce the LDF file size periodically (manually would be fine)? Is changing the recovery mode from FULL to SIMPLE and then back to FULL an option?
If so, is anyone able to tell me how, exactly, I can do this? ... I've sifted through the documentation to no avail ... :eek:
Many thanks
Rob
No, you cannot just truncate the log when you take the full backup, because you do not know whether the error forcing you to restore the database will occur prior to or after the last full backup. You may leave the office at 4pm; at 5pm some tables disappear. Backups are scheduled to run at midnight. If the logs are truncated when you arrive at the office the next morning, you cannot restore to a point in time anymore.
Thus, the ONLY way you should free space in the log files is backing them up with the BACKUP LOG statement, which frees up space internally in the log files.|||Do not change the recovery mode from FULL to SIMPLE; it will break the log chain.|||If you are just starting off with SQL Server administration, I would recommend using the maintenance plan wizard (if you are using SQL 2005, be sure to download and install SP2, which has significant improvements to maintenance plans in general).
General guidelines:
SQL 7.0/2000:
1. Create a maintenance plan that includes only the system databases (master, model, msdb). Do a full backup daily.
2. Create another maintenance plan that includes only the user databases. Do a full backup daily and a transaction log backup as often as you are comfortable (and as often as needed to keep the logs at a stable/reasonable size).
3. For both plans:
a. Back up over the network (if your network is stable), or straight to disk (separate from data/log files).
b. Retain full backups on disk for 2-3 days (more if you can afford it).
c. Retain log backups for as long as necessary to get back to your last good full backup.
d. Make sure there's another process that writes the db backups to tape.
SQL 2005
1. Create one plan for full backups for all databases
2. Create another plan for log backups for all user databases.
3. Follow other guidelines as above.
There are lots of tweaks and other things that you can do to improve backup performance, this is just a guide to get you started.
Be sure to also:
1. Practice recovery:
a. Practice full database restore
b. Practice database restore to point in time
c. Practice recovery from tape
d. Practice recovery from a dead server (ie, full rebuild/fresh install)
2. Validate your tape backups
3. Document your backup strategy and document your recovery plan.
Regards,
hmscott
Reducing filesize while deleting rows?
Thanks!
Use SqlCeEngine.Compact() or SqlCeEngine.Shrink() to do that. You can find the descriptions on MSDN.
|||Unfortunately I'm using native C++ for all this, so I don't have access to SqlCeEngine. Is there a SQL command I could execute to perform the equivalent of a shrink routine? If that doesn't exist, I'll need some way to do it while an existing OLEDB connection is open and operating. I need this database to run 24/7 with no downtime whatsoever. Thanks for the help!
|||
The database connection must be closed to do a shrink or repair. This is a design requirement so that the database file structure can be recreated.
What you can probably do is make a copy of the database, compact it, momentarily drop the connection to the original database, and replace the original with the compacted database. You will have to take a read-only lock on the original db while the copy is compacted so that no new changes take place.
|||The OLEDB provider has an Engine object, which has a CompactDatabase method.
Reducing failover time
Hi All,
Is it possible to reduce the automatic failover time on MS SQL 2005? It seems to take around a minute on my servers.
Thanks for your help.
Regards,
1 minute? That is a lot of time. Can you post your configuration, please?...
John Bocachica
|||Failover time can depend on a lot of things.
Refer to http://www.microsoft.com/technet/prodtechnol/sql/2005/technologies/dbm_best_pract.mspx for a discussion on what happens during the automatic failover.
What type of failure are you talking about here?
What is load on the database? Take a look at the redo_queue on the mirror at the time of failure.
More details needed to answer your question.
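One way to look at the redo queue mentioned above is through the Database Mirroring performance counters, which SQL Server 2005 also exposes inside the engine. An untested sketch; the exact `object_name` prefix depends on the instance name, hence the LIKE:

```sql
-- Run on the mirror server; values are in KB.
-- A large, growing Redo Queue KB means failover will take longer,
-- since the mirror must finish redo before coming online.
SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN ('Redo Queue KB', 'Log Send Queue KB')
```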
Reducing Disk I/O in SQL Server configuration
We have a SQL Server database in which the contents are updated every
few seconds (it can be as low as 4 sec). Therefore the table size
will keep growing continuously. There is a problem in the table design
itself: it has been designed without any primary key. I know this will
make the insertion slower. If we rectify this, what else should we do
on the SQL Server side to reduce the disk I/O access for the database?
The hard disk LED keeps flashing continuously.
Thanks
Hi
Does the table have any indexes?
<vanisathish@.gmail.com> wrote in message
news:1123756244.000317.291730@.z14g2000cwz.googlegroups.com...
> Hi All,
> We have a SQL server database in which the contents are updated for
> every few seconds(it can be as low as 4 sec). Therefore the tablesize
> will keep growing continuously. There is a problem in the table design
> itself. It has been designed without any primary key. I know this will
> make the insertion slower. If we rectify this, what else should we do
> in the SQL Server side to reduce the Disk I/O access for the database.
> The harddisk LED keeps flashing continuosly.
> Thanks
>
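If the missing primary key is to be rectified as the poster suggests, an ever-increasing clustered key keeps inserts appending at the end of the table rather than splitting pages mid-table, which helps with a constant insert load. A hypothetical sketch; the table and column names below are invented, since the thread gives none:

```sql
-- Hypothetical table name. Adds an identity column as the
-- clustered primary key so new rows always land at the end.
ALTER TABLE SensorReadings
ADD ReadingID INT IDENTITY(1, 1) NOT NULL
    CONSTRAINT PK_SensorReadings PRIMARY KEY CLUSTERED
```

Beyond that, the usual levers for this workload are batching inserts into fewer transactions and keeping secondary indexes to a minimum, since each index multiplies the write I/O.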
Reducing database size after dropping text column
A while back I dropped a text column from a SQL Server 7 database
roughly 3GB in size. I expected the size of the database to decrease
by around 1GB, but no change occurred. After searching usenet, I
discovered that SQL Server 7 has no way of reclaiming that space, but
that there is some command that can be run in SQL Server 2000 that
will reclaim it.
I have since migrated this database to SQL Server 2000, and am now
trying to figure out what that command is, but cannot locate any
usenet posts about it... also tried searching books online, but can't
find anything that way either.
Does anyone know what I should run?
Thanks,
Tom
Are you talking about DBCC Shrinkdatabase?
--
Kevin Hill
President
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
"Thomas" <thomas-ggl-01@.data.iatn.net> wrote in message
news:4f2cac50.0311060740.6e406624@.posting.google.com...
> Hello,
> A while back I dropped a text column from a SQL Server 7 database
> roughly 3GB in size. I expected the size of the database to decrease
> by around 1GB, but no change occurred. After searching usenet, I
> discovered that SQL Server 7 has no way of reclaiming that space, but
> that there is some command that can be run in SQL Server 2000 that
> will reclaim it.
> I have since migrated this database to SQL Server 2000, and am now
> trying to figure out what that command is, but cannot locate any
> usenet posts about it... also tried searching books online, but can't
> find anything that way either.
> Does anyone know what I should run?
> Thanks,
> Tom|||I believe what you're looking for is DBCC CLEANTABLE. Take a look in BOL for
more details. But as Kevin states, this will not shrink the db; it just
reclaims the wasted space from the dropped text column.
--
Andrew J. Kelly
SQL Server MVP
"Thomas" <thomas-ggl-01@.data.iatn.net> wrote in message
news:4f2cac50.0311060740.6e406624@.posting.google.com...
> Hello,
> A while back I dropped a text column from a SQL Server 7 database
> roughly 3GB in size. I expected the size of the database to decrease
> by around 1GB, but no change occurred. After searching usenet, I
> discovered that SQL Server 7 has no way of reclaiming that space, but
> that there is some command that can be run in SQL Server 2000 that
> will reclaim it.
> I have since migrated this database to SQL Server 2000, and am now
> trying to figure out what that command is, but cannot locate any
> usenet posts about it... also tried searching books online, but can't
> find anything that way either.
> Does anyone know what I should run?
> Thanks,
> Tom|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message news:<#C4GkAIpDHA.1948@.TK2MSFTNGP12.phx.gbl>...
> I believe what your looking for is DBCC CLEANTABLE. Take a look in BOL for
> more details. But as Kevin states this will not shrink the db, it just
> reclaims the wasted space from the dropped text column.
Thanks for the feedback guys. I ran DBCC CLEANTABLE on all the tables
in the database where the text columns were removed. No filesize
change as you indicated. I then did the 'shrinkdatabase' and even the
'shrinkfile' commands, and the database size is still almost 3GB, same
as the database that is on my SQL Server 7 machine. Any other ideas?
Thanks,
Tom|||Do you know for sure there is enough free space to effectively shrink the
db? Is SQL Server still reporting the size of that table to be what it was
before you dropped the columns? Do you have a clustered index on the table?
IF so you might want to do a DBREINDEX and see if that helps any.
--
Andrew J. Kelly
SQL Server MVP
"Thomas" <thomas-ggl-01@.data.iatn.net> wrote in message
news:4f2cac50.0311061632.17a824e@.posting.google.com...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:<#C4GkAIpDHA.1948@.TK2MSFTNGP12.phx.gbl>...
> > I believe what your looking for is DBCC CLEANTABLE. Take a look in BOL
for
> > more details. But as Kevin states this will not shrink the db, it just
> > reclaims the wasted space from the dropped text column.
> Thanks for the feedback guys. I ran DBCC CLEANTABLE on all the tables
> in the database where the text columns were removed. No filesize
> change as you indicated. I then did the 'shrinkdatabase' and even the
> 'shrinkfile' commands, and the database size is still almost 3GB, same
> as the database that is on my SQL Server 7 machine. Any other ideas?
> Thanks,
> Tom|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message news:<#IWTi2MpDHA.2244@.TK2MSFTNGP12.phx.gbl>...
> Do you know for sure there is enough free space to effectively shrink the
> db? Is SQL Server still reporting the size of that table to be what it was
> before you dropped the columns? Do you have a clustered index on the table?
> IF so you might want to do a DBREINDEX and see if that helps any.
Hi Andrew,
Thanks for the help on this, it's quite bizarre.
There is definitely enough free space on the disk, if that is what you
mean. I didn't check the individual table sizes before I did the
cleantable commands, so I'm not sure if they are being reported as
different, but the size of the entire database is still the same or
even slightly larger! (?)
I do have a clustered index on all the tables that had text columns
dropped, and I performed the dbreindex on all those tables, then did
another 'shrinkdatabase', and the size of the db has not gone down at
all.
At one point I read another way to go about this is to use BCP, but
I've not used that before so I will have to do some research.
Thanks,
Thomas|||One other point that may be a factor here. You can only shrink the db and
log file down to the size it was originally created at and no more. So if
the db was created at 3GB you can run shrink all you want and nothing will
happen. Does Shrinkfile show any estimated pages that can be removed?
--
Andrew J. Kelly
SQL Server MVP
"Thomas" <thomas-ggl-01@.data.iatn.net> wrote in message
news:4f2cac50.0311081249.52f030cb@.posting.google.com...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:<#IWTi2MpDHA.2244@.TK2MSFTNGP12.phx.gbl>...
> > Do you know for sure there is enough free space to effectively shrink
the
> > db? Is SQL Server still reporting the size of that table to be what it
was
> > before you dropped the columns? Do you have a clustered index on the
table?
> > IF so you might want to do a DBREINDEX and see if that helps any.
> Hi Andrew,
> Thanks for the help on this, it's quite bizarre.
> There is definitely enough free space on the disk, if that is what you
> mean. I didn't check the individual table sizes before I did the
> cleantable commands, so I'm not sure if they are being reported as
> different, but the size of the entire database is still the same or
> even slightly larger! (?)
> I do have a clustered index on all the tables that had text columns
> dropped, and I performed the dbreindex on all those tables, then did
> another 'shrinkdatabase', and the size of the db has not gone down at
> all.
> At one point I read another way to go about this is to use BCP, but
> I've not used that before so I will have to do some research.
> Thanks,
> Thomas
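Summarizing the sequence attempted in this thread, for readers with the same dropped-text-column problem. A hedged sketch only; the database, table, and logical file names are placeholders:

```sql
-- 1. Reclaim space left behind by dropped text/ntext/image columns.
DBCC CLEANTABLE ('YourDb', 'YourTable')

-- 2. Rebuild the clustered index to compact the remaining pages
--    (all indexes on the table, per the DBREINDEX default).
DBCC DBREINDEX ('YourTable')

-- 3. Then try to release the freed pages back to the OS
--    (argument is the logical data file name, not the .mdf path).
DBCC SHRINKFILE (YourDb_Data)
```

As the thread shows, step 3 only helps if steps 1-2 actually produced free space inside the file; check the estimated pages SHRINKFILE reports before expecting the file to get smaller.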
Reducing Database Size (contains images)
Do you use READTEXT/WRITETEXT a lot?
What's the size of an average image?|||The average size is around 680k. I do not use READTEXT and WRITETEXT a lot. How do I go about storing the path instead of the actual image?|||Someone suggested a UNC path to the images. What is this?|||UNC = Universal Naming Convention
\\servername\sharename\pathname\filename.img
so you can place the images anywhere|||Is there an easy way to do this if I have around 360 images that need to be moved? What do I put in the table's data field?|||UNC=\\SERVER\SHARE\FILENAME.EXT
There are plenty of reasons to store images in the database itself. It all depends on the app itself and what is actually stored in those images. I prefer to use image and (n)text fields, because that's what a database is for. And if there is anything that is associated with the business through your app, its place is in the database. The question comes down to how this info is being used. For example, in the healthcare industry it becomes more and more practical to do high-speed scanning of claims. Scanned images are burned onto the media of choice. Some organizations are also storing those images in the database (TIFF format); others go even further and apply OCR technology to avoid the data entry process. Key elements of the claim image are stored in related tables to meet search needs. The only time the image field is accessed is when there is a need to view the original claim. Would it make sense to store a path to a TIFF file instead of the file itself? Does the database have any control over the OS environment? NO!!! In other cases, Full-Text Search capability is used if the type of file being stored meets the parsing and filtering capabilities of the search engine (Word documents, for example). Whoever told you that "too much binary data is not good" is operating under unqualified assumptions. You need to look into the nature of the business usage of your app and determine whether it does or does not make sense to rely on the database for safekeeping of your images (I think the answer is pretty clear, hey?!)
And the debate starts again...
If you have significant volume... I'd say it's an issue...
Hell, they went out and bought a Kodak imaging platter to handle everything... it's massive...
No way a db could handle this volume...
There's also the pain of reading and writing in chunks (how do you update a chunk again? Is it the whole thing?)
How big are the images?
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=36026|||He told you, - 680K|||My problem is that the database itself is far too large, and it is 99% due to these images. They are simply photos of employees and there are around 360 of them. We use them on a web site where once a name is typed in, info about that person is displayed; background info, history with the company, sales numbers, the photo, etc. How do i go about creating this UNC to the images? Where do you recommend i save this pictures? I appreciate any help.|||So, are we talking about less than 250MB worth of images? If this size is taken by 1% of your structure, then your database is less than 500MB all together. Why are you concerned about this? Why are you calling it "a problem?"|||What is far too large?
If the pix are 1 MB each you'd only be saving 360 MB
That's not big
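If you do go the UNC-path route the poster asked about, the usual pattern is to keep only a path column and serve the file from the share. A minimal sketch, assuming made-up table and column names (they are not the poster's actual schema):

```sql
-- Hypothetical: store a UNC path to each photo instead of the image bytes.
CREATE TABLE EmployeePhoto (
    EmployeeID  INT          NOT NULL PRIMARY KEY,
    PhotoPath   VARCHAR(260) NOT NULL  -- e.g. '\\fileserver\photos\jsmith.jpg'
)

-- The web page looks up the path and streams the file from the share:
SELECT PhotoPath
FROM EmployeePhoto
WHERE EmployeeID = 42
```

The trade-off raised earlier in the thread still applies: the database has no control over the OS files, so backups, permissions, and the share contents have to be kept in sync by hand.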
Reducing compilations per second
Does anybody have a good list or know of a good website that shows various
things you can do to reduce the compilations per second? I'm running SQL
Server 2000 and I'm getting anywhere from 114 to 200 compilations per
second, and obviously my CPU is pegged when it hits that higher end. What I
do know that is supposed to help:
1. upgrade to 2005
2. increase physical memory
3. increase minimum server memory setting
4. use a single database instead of several identical ones.
5. possibly change the code to use the full database path including the
owner for table names
Anything else?
thanks,
Cory|||http://www.sql-server-performance.com/tips/sql_server_performance_monitor_coutners_p1.aspx
SQL compilation of Transact-SQL code is a normal part of SQL Server's
operation. But because compilations chew up CPU and other resources, SQL
attempts to reuse as many execution plans in cache as possible (execution
plans are created when compilations occur). The more execution plans are
reused, the less overhead there is on the server, and the faster overall
performance there is.
To find out how many compilations SQL Server is doing, you can monitor the
SQLServer: SQL Statistics: SQL Compilations/Sec counter. As you would
expect, this measures how many compilations are performed by SQL Server per
second.
Generally speaking, if this figure is over 100 compilations per second, then
you may be experiencing unnecessary compilation overhead. A high number such
as this might indicate that your server is just very busy, or it could mean
that unnecessary compilations are being performed. For example, compilations
can be forced by SQL Server if object schema changes, if previously
parallelized execution plans have to run serially, if statistics are
recomputed, or if a number of other things occur. In some cases, you have
the power to reduce the number of unnecessary compilations. See this page
for tips on how to do this.
http://www.sql-server-performance.com/tips/stored_procedures.asp
If you find that your server is performing over 100 compilations per second,
you should take the time to investigate if the cause of this is something
that you can control. Too many compilations will hurt your SQL Server's
performance. [7.0, 2000] Updated 9-4-2006
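On SQL Server 2000 the same counter can also be read from inside the server rather than from Performance Monitor; a sketch:

```sql
-- Read the SQL Compilations/sec counter from sysperfinfo (SQL 2000).
-- The cntr_value is cumulative since startup, so sample it twice and
-- divide the difference by the elapsed seconds to get the actual rate.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'SQL Compilations/sec'
```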
Geoff Chovaz
MCTS: SQL Server 2005
MCITP: Database Administrator
MCITP: Database Developer
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
> Does anybody have a good list or know of a good website that shows various
> things you can do to reduce the compilations per second? I'm running SQL
> Server 2000 and I'm getting anywhere from 114 to 200 compilations per
> second, and obviously my CPU is pegged when it hits that higher end. What
> I do know that is supposed to help:
> 1. upgrade to 2005
> 2. increase physical memory
> 3. increase minimum server memory setting
> 4. use a single database instead of several identical ones.
> 5. possibly change the code to use the full database path including the
> owner for table names
>
> Anything else?
>
> thanks,
> Cory
>
>|||correct link
http://www.sql-server-performance.com/tips/stored_procedures_p1.aspx
Geoff Chovaz
MCTS: SQL Server 2005
MCITP: Database Administrator
MCITP: Database Developer
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
> Does anybody have a good list or know of a good website that shows various
> things you can do to reduce the compilations per second? I'm running SQL
> Server 2000 and I'm getting anywhere from 114 to 200 compilations per
> second, and obviously my CPU is pegged when it hits that higher end. What
> I do know that is supposed to help:
> 1. upgrade to 2005
> 2. increase physical memory
> 3. increase minimum server memory setting
> 4. use a single database instead of several identical ones.
> 5. possibly change the code to use the full database path including the
> owner for table names
>
> Anything else?
>
> thanks,
> Cory
>
>|||Also, did you run a trace and see where the compilations are coming from?
What queries are being executed and by whom?
--
Geoff Chovaz
MCTS: SQL Server 2005
MCITP: Database Administrator
MCITP: Database Developer
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
> Does anybody have a good list or know of a good website that shows various
> things you can do to reduce the compilations per second? I'm running SQL
> Server 2000 and I'm getting anywhere from 114 to 200 compilations per
> second, and obviously my CPU is pegged when it hits that higher end. What
> I do know that is supposed to help:
> 1. upgrade to 2005
> 2. increase physical memory
> 3. increase minimum server memory setting
> 4. use a single database instead of several identical ones.
> 5. possibly change the code to use the full database path including the
> owner for table names
>
> Anything else?
>
> thanks,
> Cory
>
>|||See in-line
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
> Does anybody have a good list or know of a good website that shows various
> things you can do to reduce the compilations per second? I'm running SQL
> Server 2000 and I'm getting anywhere from 114 to 200 compilations per
> second, and obviously my CPU is pegged when it hits that higher end. What
> I do know that is supposed to help:
> 1. upgrade to 2005
This can help as it can compile at the statement level and if you have
lots of adhoc code (which it sounds like you do) you can use the FORCE
parameterization option.
> 2. increase physical memory
This may or may not help. If the code is simply not reusable this may
make matters worse.
> 3. increase minimum server memory setting
This will not help unless you have other apps on the same server. In
that case think about removing them.
> 4. use a single database instead of several identical ones.
Not a good way to go to fix compilation issues. Optimize the code first.
> 5. possibly change the code to use the full database path including the
> owner for table names
That is not the full path. FULL would be DB.Owner.Object. But you are
on the right track. ALWAYS owner qualify your objects.
> Anything else?
Yes, optimize the code so the queries can be reused. Have a look here:
http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx
>
> thanks,
> Cory
>
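The owner-qualification advice above looks like this in practice (the object names here are made up for illustration, and dbo is assumed as the owner):

```sql
-- Unqualified: SQL Server first probes for an object owned by the
-- current user, which can prevent plan sharing between users.
EXEC GetOrders
SELECT OrderID FROM Orders

-- Owner-qualified: resolves directly to one object, so all callers
-- can share the same cached plan.
EXEC dbo.GetOrders
SELECT OrderID FROM dbo.Orders
```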
>|||1) Are you running stored procs or ADO/ADONET code? Some quick tips for
sproc compilations:
a) reduce temp table usage (table variables maybe, or combine statements to
eliminate temp storage needs altogether)
b) create all temp objects at top of sproc
c) remove 'bad' SET statements from sprocs. see BOL/MS site for these.
Reducing compilations from ADO/ADONET code usually requires rewriting the
code to use more set based logic. As Andy Kelly said, on 2005 you can use
the FORCED PARAMETERIZATION database setting. I have a client that saw a
30-40% throughput improvement on their bad ADO code using that setting.
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
> Does anybody have a good list or know of a good website that shows various
> things you can do to reduce the compilations per second? I'm running SQL
> Server 2000 and I'm getting anywhere from 114 to 200 compilations per
> second, and obviously my CPU is pegged when it hits that higher end. What
> I do know that is supposed to help:
> 1. upgrade to 2005
> 2. increase physical memory
> 3. increase minimum server memory setting
> 4. use a single database instead of several identical ones.
> 5. possibly change the code to use the full database path including the
> owner for table names
>
> Anything else?
>
> thanks,
> Cory
>
>|||We are running nearly 100% ColdFusion code. We aren't using temp tables
even in the few stored procs that we have. We are currently rewriting the
most popular apps on our sites to use parameters on everything. We did look
into the cache on the server using some of the system stored procedures and
we realized that the ones that were parameterized are getting used over 200
times, while the ad-hoc queries are compiled separately with every hit, and
only have one hit on them.
Also, I posted this elsewhere:
One thing that we are still very confused about is how the initial lookup
actually works. We're mainly wondering if two similar queries that work
identically can share the same execution plan in the cache. For example, if
you ran this:
<cfquery>
Select field from table where ID = <CFQUERYPARAM value="1">
</cfquery>
Then on a different page that you wrote before you knew all about code reuse
in CFC's and stuff, you ran this:
<cfquery>
SELECT field
FROM table
WHERE id = <CFQUERYPARAM value="1">
</cfquery>
Same thing, but formatted differently. Will they both have to be compiled
into two different execution plans, or is SQL Server smart enough to use the
same plan in the cache? Is it a simple text lookup, or are formatting
characters stripped and it reduced down to bytecode or something? I have no
idea.
Thanks,
Cory
"TheSQLGuru" <kgboles@.earthlink.net> wrote in message
news:13fkrvpprbubr9a@.corp.supernews.com...
> 1) Are you running stored procs or ADO/ADONET code? Some quick tips for
> sproc compilations:
> a) reduce temp table usage (table variables maybe, or combine statements
> to eliminate temp storage needs altogether)
> b) create all temp objects at top of sproc
> c) remove 'bad' SET statements from sprocs. see BOL/MS site for these.
> Reducing compilations from ADO/ADONET code usually requires rewriting the
> code to use more set based logic. As Andy Kelly said, on 2005 you can use
> the FORCED PARAMETERIZATION database setting. I have a client that saw a
> 30-40% throughput improvement on their bad ADO code using that setting.
>
> "Cory Harrison" <charrison@.csiweb.com> wrote in message
> news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
>> Does anybody have a good list or know of a good website that shows
>> various things you can do to reduce the compilations per second? I'm
>> running SQL Server 2000 and I'm getting anywhere from 114 to 200
>> compilations per second, and obviously my CPU is pegged when it hits that
>> higher end. What I do know that is supposed to help:
>> 1. upgrade to 2005
>> 2. increase physical memory
>> 3. increase minimum server memory setting
>> 4. use a single database instead of several identical ones.
>> 5. possibly change the code to use the full database path including the
>> owner for table names
>>
>> Anything else?
>>
>> thanks,
>> Cory
>>
>|||If you don't parameterize your queries (which can be identified using a profiler trace), then SQL
Server does the lookup using a checksum of the whole query string. i.e., it is
"everything"-sensitive.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OqRkKMRAIHA.4164@.TK2MSFTNGP06.phx.gbl...
> We are running nearly 100% ColdFusion code. We aren't using temp tables even in the few stored
> procs that we have. We are currently rewriting the most popular apps on our sites to use
> parameters on everything. We did look into the cache on the server using some of the system
> stored procedures and we realized that the ones that were parameterized are getting used over 200
> times, while the ad-hoc queries are compiled separately with every hit, and only have one hit on
> them.
>
> Also, I posted this elsewhere:
> One thing that we are still very confused about is how the initial lookup actually works. We're
> mainly wondering if two similar queries that work identically can share the same execution plan in
> the cache. For example, if you ran this:
>
> <cfquery>
> Select field from table where ID = <CFQUERYPARAM value="1">
> </cfquery>
>
> Then on a different page that you wrote before you knew all about code reuse in CFC's and stuff,
> you ran this:
>
>
> <cfquery>
> SELECT field
> FROM table
> WHERE id = <CFQUERYPARAM value="1">
> </cfquery>
>
>
>
> Same thing, but formatted differently. Will they both have to be compiled into two different
> execution plans, or is SQL Server smart enough to use the same plan in the cache? Is it a simple
> text lookup, or are formatting characters stripped and it reduced down to bytecode or something?
> I have no idea.
>
>
> Thanks,
> Cory
>
>
>
>
>
>
> "TheSQLGuru" <kgboles@.earthlink.net> wrote in message news:13fkrvpprbubr9a@.corp.supernews.com...
>> 1) Are you running stored procs or ADO/ADONET code? Some quick tips for sproc compilations:
>> a) reduce temp table usage (table variables maybe, or combine statements to eliminate temp
>> storage needs altogether)
>> b) create all temp objects at top of sproc
>> c) remove 'bad' SET statements from sprocs. see BOL/MS site for these.
>> Reducing compilations from ADO/ADONET code usually requires rewriting the code to use more set
>> based logic. As Andy Kelly said, on 2005 you can use the FORCED PARAMETERIZATION database
>> setting. I have a client that saw a 30-40% throughput improvement on their bad ADO code using
>> that setting.
>>
>> "Cory Harrison" <charrison@.csiweb.com> wrote in message
>> news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
>> Does anybody have a good list or know of a good website that shows various things you can do to
>> reduce the compilations per second? I'm running SQL Server 2000 and I'm getting anywhere from
>> 114 to 200 compilations per second, and obviously my CPU is pegged when it hits that higher end.
>> What I do know that is supposed to help:
>> 1. upgrade to 2005
>> 2. increase physical memory
>> 3. increase minimum server memory setting
>> 4. use a single database instead of several identical ones.
>> 5. possibly change the code to use the full database path including the owner for table names
>>
>> Anything else?
>>
>> thanks,
>> Cory
>>
>>
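Since the lookup is a checksum over the whole query string, you can see directly whether your plans are being reused; on SQL 2000, a sketch:

```sql
-- Ad-hoc plans stuck at usecounts = 1 are being compiled fresh on every
-- call; parameterized plans should accumulate high usecounts.
SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE objtype IN ('Adhoc', 'Prepared')
ORDER BY usecounts DESC
```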
>|||Have a look at these to see how this works and how to address it.
http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx
http://support.microsoft.com/kb/243588
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OqRkKMRAIHA.4164@.TK2MSFTNGP06.phx.gbl...
> We are running nearly 100% ColdFusion code. We aren't using temp tables
> even in the few stored procs that we have. We are currently rewriting the
> most popular apps on our sites to use parameters on everything. We did
> look into the cache on the server using some of the system stored
> procedures and we realized that the ones that were parameterized are
> getting used over 200 times, while the ad-hoc queries are compiled
> separately with every hit, and only have one hit on them.
>
> Also, I posted this elsewhere:
> One thing that we are still very confused about is how the initial lookup
> actually works. We're mainly wondering if two similar queries that work
> identically can share the same execution plan in the cache. For example,
> if you ran this:
>
> <cfquery>
> Select field from table where ID = <CFQUERYPARAM value="1">
> </cfquery>
>
> Then on a different page that you wrote before you knew all about code
> reuse in CFC's and stuff, you ran this:
>
>
> <cfquery>
> SELECT field
> FROM table
> WHERE id = <CFQUERYPARAM value="1">
> </cfquery>
>
>
>
> Same thing, but formatted differently. Will they both have to be compiled
> into two different execution plans, or is SQL Server smart enough to use
> the same plan in the cache? Is it a simple text lookup, or are formatting
> characters stripped and it reduced down to bytecode or something? I have
> no idea.
>
>
> Thanks,
> Cory
>
>
>
>
>
>
> "TheSQLGuru" <kgboles@.earthlink.net> wrote in message
> news:13fkrvpprbubr9a@.corp.supernews.com...
>> 1) Are you running stored procs or ADO/ADONET code? Some quick tips for
>> sproc compilations:
>> a) reduce temp table usage (table variables maybe, or combine statements
>> to eliminate temp storage needs altogether)
>> b) create all temp objects at top of sproc
>> c) remove 'bad' SET statements from sprocs. see BOL/MS site for these.
>> Reducing compilations from ADO/ADONET code usually requires rewriting the
>> code to use more set based logic. As Andy Kelly said, on 2005 you can
>> use the FORCED PARAMETERIZATION database setting. I have a client that
>> saw a 30-40% throughput improvement on their bad ADO code using that
>> setting.
>>
>> "Cory Harrison" <charrison@.csiweb.com> wrote in message
>> news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
>> Does anybody have a good list or know of a good website that shows
>> various things you can do to reduce the compilations per second? I'm
>> running SQL Server 2000 and I'm getting anywhere from 114 to 200
>> compilations per second, and obviously my CPU is pegged when it hits
>> that higher end. What I do know that is supposed to help:
>> 1. upgrade to 2005
>> 2. increase physical memory
>> 3. increase minimum server memory setting
>> 4. use a single database instead of several identical ones.
>> 5. possibly change the code to use the full database path including the
>> owner for table names
>>
>> Anything else?
>>
>> thanks,
>> Cory
>>
>>
>
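The advice in this thread boils down to sending the query with parameters instead of literals, so the cached plan text is byte-identical every time. A sketch of what the two styles look like at the server (the table and column names are illustrative):

```sql
-- Ad-hoc: every distinct ID value produces a different query string,
-- hence a different checksum and a fresh compile.
SELECT field FROM dbo.table1 WHERE ID = 1

-- Parameterized: the statement text is constant, so the plan is reused
-- no matter what value is passed.
EXEC sp_executesql
    N'SELECT field FROM dbo.table1 WHERE ID = @id',
    N'@id INT',
    @id = 1
```

CFQUERYPARAM, as Cory is already doing, makes the ColdFusion driver emit the parameterized form.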
Reducing column size
I want to reduce the length of a column in one of my database tables
from varchar(2048) to varchar(900), basically so I can create an index
on the column to gain much improved performance.
To do this, I am executing the following statement:
ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
Executing this in query analyzer results in the following error:
Server: Msg 8152, Level 16, State 9, Line 1
String or binary data would be truncated.
The statement has been terminated.
If I make this change via Enterprise Manager, it appears as a warning
and I can click OK to ignore and make the change. However, in query
analyzer this is treated as an error and the statement is not
performed.
What I want is to basically ignore this error and make the change
regardless. Is this possible?
It basically tells you that the data in that column is longer than varchar(900).
You can only reduce a column to a length no shorter than the longest value
currently stored in it.
Review the rows where that column is longer than 900 characters.
Cheers
"jasonatkins2001@.hotmail.com" wrote:
> I want to reduce the length of a column in one of my database tables
> from varchar(2048) to varchar(900), basically so I can create an index
> on the column to gain much improved performance.
> To do this, I am executing the following statement:
> ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
> Executing this in query analyzer results in the following error:
> Server: Msg 8152, Level 16, State 9, Line 1
> String or binary data would be truncated.
> The statement has been terminated.
> If I make this change via Enterprise Manager, it appears as a warning
> and I can click OK to ignore and make the change. However, in query
> analyzer this is treated as an error and the statement is not
> performed.
> What i want is to basically ignore this error and make the change
> regardless. Is this possible?
>
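Before shrinking the column you can check exactly which rows would be affected, and truncate them explicitly if losing the tail is acceptable; a sketch using the poster's own statement:

```sql
-- 1. Count the rows that exceed the new limit.
SELECT COUNT(*) FROM myTable WHERE LEN(description) > 900

-- 2. If losing the excess is acceptable, truncate those rows first.
UPDATE myTable
SET description = LEFT(description, 900)
WHERE LEN(description) > 900

-- 3. Now the ALTER succeeds, since nothing would be truncated implicitly.
ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
```

Note that LEN ignores trailing spaces; use DATALENGTH if trailing blanks matter for your data.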
|||Notice that you have data that is longer than 900 bytes, and by forcing
the modification you will lose data.
The reason that EM does it while the ALTER statement in QA
doesn't work is that EM doesn't really run an ALTER TABLE statement.
Instead, EM creates a new table and runs an INSERT...SELECT from the
source table into the new table. Then it drops the old table, renames
the new table, and re-creates the table constraints and the references
from other tables to the new table.
Also notice that even with an index on a column that holds 900
bytes, the index won't be very effective.
Adi
jasonatkins2001@.hotmail.com wrote:
> I want to reduce the length of a column in one of my database tables
> from varchar(2048) to varchar(900), basically so I can create an index
> on the column to gain much improved performance.
> To do this, I am executing the following statement:
> ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
> Executing this in query analyzer results in the following error:
> Server: Msg 8152, Level 16, State 9, Line 1
> String or binary data would be truncated.
> The statement has been terminated.
> If I make this change via Enterprise Manager, it appears as a warning
> and I can click OK to ignore and make the change. However, in query
> analyzer this is treated as an error and the statement is not
> performed.
> What i want is to basically ignore this error and make the change
> regardless. Is this possible?
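The Enterprise Manager rebuild described above can also be done by hand; a sketch, with the remaining columns and constraint re-creation elided:

```sql
BEGIN TRANSACTION

-- 1. Create the replacement table with the narrower column.
CREATE TABLE myTable_new (
    description VARCHAR(900) NULL
    -- ...other columns as in myTable...
)

-- 2. Copy the data across, truncating explicitly.
INSERT INTO myTable_new (description)
SELECT LEFT(description, 900) FROM myTable

-- 3. Swap the tables.
DROP TABLE myTable
EXEC sp_rename 'myTable_new', 'myTable'

COMMIT TRANSACTION
```

This is more work than the UPDATE-then-ALTER route, but it is what EM does behind the warning dialog.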