
Wednesday, March 7, 2012

reducing tempdb

I am trying to reduce the size of my tempdb using the DBCC SHRINKFILE command and I get the following error.

DBCC SHRINKFILE: Page 1:1164376 could not be moved because it is a work table page.

How can I get around this so I can shrink the db?

Hi

Please check the following KB article.

http://support.microsoft.com/kb/307487
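KB 307487 walks through the options. One minimal sketch, assuming the default logical data-file name `tempdev` and an illustrative 100 MB target, is to restart SQL Server (which recreates tempdb, releasing the work-table pages) and then shrink before other activity starts:

```sql
-- Sketch only: the logical file name and target size are assumptions.
-- Run right after a restart, while tempdb is quiet.
USE tempdb;
GO
DBCC SHRINKFILE (tempdev, 100);  -- shrink the data file to ~100 MB
GO
```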

Jag

Saturday, February 25, 2012

Reducing physical transaction log file size.

I need some help to reduce the physical file size of a database transaction
log. I ran the shrink database option under Enterprise Manager and was able
to shrink the logical size of the log file; however, I would also like to
reduce the physical size of the file to free up disk space. I have tried to
run the DBCC SHRINKFILE and DBCC SHRINKDATABASE commands, but that does not
appear to be successful. Can anybody help? I am running SQL 2000. Thanks.
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/

Reducing log (ldf) file size

I thought that the size of the log file went down when you did a full
database backup. How can I reduce the size of my log file? Do I need to
back it up separately? Thanks.
David

DBCC SHRINKFILE(databasename_log, 0)
This will reduce the log file size to 504 KB.
However, if you don't change your current recovery model and/or the behavior
of your apps, the log file is just going to grow again between backups. So
the above is only a temporary relief.
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
You are right. I thought I had set it to "Simple" but it was set to
"Full". Thanks Aaron.
And additionally to Aaron's reply: you need to back up the log separately
(look up BACKUP LOG in Books Online) to keep the log in shape.
Setting the recovery model to Simple will limit your ability to recover data
in case something happens to the database.
Regards
Steen
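Putting Aaron's and Steen's replies together, a sketch for the FULL recovery model (database name and backup path are made up for illustration):

```sql
-- Back up the log so its inactive portion can be reused...
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
-- ...then shrink the physical file (target size in MB).
DBCC SHRINKFILE (MyDatabase_log, 100);
```

Without regular BACKUP LOG runs (or a switch to SIMPLE recovery), the file will simply grow again, as noted above.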


Reducing failover time

Hi All,

Is it possible to reduce the automatic failover time on MS SQL 2005? It seems to take around a minute on my servers.

Thanks for your help.

Regards,

1 minute? That's a lot of time. Can you post your configuration, please?

John Bocachica


Failover time can depend on a lot of things.

Refer to http://www.microsoft.com/technet/prodtechnol/sql/2005/technologies/dbm_best_pract.mspx for a discussion on what happens during the automatic failover.

What type of failure are you talking about here?

What is the load on the database? Take a look at the redo_queue on the mirror at the time of failure.

More details needed to answer your question.
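For the redo queue specifically, a sketch against the mirror instance (SQL Server 2005 DMV; counter names as exposed by the Database Mirroring performance object):

```sql
-- Inspect the mirroring queues; large values here lengthen failover.
SELECT instance_name AS database_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN ('Redo Queue KB', 'Log Send Queue KB');
```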

Reducing Database Size (contains images)

Hello. I am wondering how to effectively reduce the size of my database. After viewing the individual table sizes, I have come to realize that nearly 99% of the database's size is due to images. I am told that too much binary data is not good. How can I go about reducing the size of my database (possibly the images themselves)? I'd appreciate any help.

Store the path and filename instead of the image?

Do you use READTEXT/WRITETEXT a lot?

What's the size of an average image?

The average size is around 680K. I do not use READTEXT and WRITETEXT a lot. How do I go about storing the path instead of the actual image?

Someone suggested a UNC path to the images. What is this?

UNC = Universal Naming Convention

\\servername\sharename\pathname\filename.img

so you can place the images anywhere

Is there an easy way to do this if I have around 360 images that need to be moved? What do I put in the table's data field?

UNC = \\SERVER\SHARE\FILENAME.EXT

There are plenty of reasons to store images in the database itself. It all depends on the app itself, and on what is actually stored in those images. I prefer to use image and (n)text fields, because that's what a database is for. And if there is anything that is associated with the business through your app, its place is in the database. The question is how this info is being used. For example, in the healthcare industry it is becoming more and more practical to do high-speed scanning of claims. Scanned images are burned onto the media of choice. Some organizations also store those images in the database (TIFF format); others go even further and apply OCR technology to avoid the data-entry process. Key elements of the claim image are stored in related tables to meet search needs. The only time the image field is accessed is when there is a need to view the original claim. Would it make sense to store a path to a TIFF file instead of the file itself? Does the database have any control over the OS environment? NO!!! In other cases Full-Text Search capability is used if the type of file being stored meets the parsing and filtering capabilities of the search engine (Word documents, for example). Whoever told you that "too much binary data is not good" is operating under unqualified assumptions. You need to look into the nature of the business usage of your app and determine whether it does or does not make sense to rely on the database for safe-keeping of your images (I think the answer is pretty clear, hey?!)

And the debate starts again...

If you have significant volume...I'd say it's an issue...

Hell, they went out and bought a Kodak imaging platter to handle everything... it's massive...

no way a db could handle this volume...

There's also the pain of reading and writing in chunks (how do you update a chunk again? Is it the whole thing?)

How big are the images?

http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=36026

He told you: 680K.

My problem is that the database itself is far too large, and it is 99% due to these images. They are simply photos of employees and there are around 360 of them. We use them on a web site where, once a name is typed in, info about that person is displayed: background info, history with the company, sales numbers, the photo, etc. How do I go about creating this UNC to the images? Where do you recommend I save these pictures? I appreciate any help.

So, are we talking about less than 250MB worth of images? If that size is 99% of your structure, then your database is less than 500MB all together. Why are you concerned about this? Why are you calling it "a problem"?

What is far too large?

If the pix are 1 MB, you'd only be saving 360 MB.

That's not big
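A sketch of the path-instead-of-image approach discussed above (table and column names are invented for illustration):

```sql
-- Store a UNC path to each photo rather than the photo itself.
CREATE TABLE dbo.EmployeePhoto (
    EmployeeID int PRIMARY KEY,
    PhotoPath  nvarchar(260) NOT NULL  -- e.g. N'\\server\share\photos\1234.jpg'
);
-- The web site then serves the file from the share and the database stays small.
```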

reducing compilations per second?

Does anybody have a good list or know of a good website that shows various
things you can do to reduce the compilations per second? I'm running SQL
Server 2000 and I'm getting anywhere from 114 to 200 compilations per
second, and obviously my CPU is pegged when it hits that higher end. What I
do know that is supposed to help:
1. upgrade to 2005
2. increase physical memory
3. increase minimum server memory setting
4. use a single database instead of several identical ones.
5. possibly change the code to use the full database path including the
owner for table names
Anything else?
thanks,
Cory

http://www.sql-server-performance.com/tips/sql_server_performance_monitor_coutners_p1.aspx
Compilation of Transact-SQL code is a normal part of SQL Server's
operation. But because compilations chew up CPU and other resources, SQL
attempts to reuse as many execution plans in cache as possible (execution
plans are created when compilations occur). The more execution plans are
reused, the less overhead there is on the server, and the faster overall
performance there is.
To find out how many compilations SQL Server is doing, you can monitor the
SQLServer: SQL Statistics: SQL Compilations/Sec counter. As you would
expect, this measures how many compilations are performed by SQL Server per
second.
Generally speaking, if this figure is over 100 compilations per second, then
you may be experiencing unnecessary compilation overhead. A high number such
as this might indicate that your server is just very busy, or it could mean
that unnecessary compilations are being performed. For example, compilations
can be forced by SQL Server if object schema changes, if previously
parallelized execution plans have to run serially, if statistics are
recomputed, or if a number of other things occur. In some cases, you have
the power to reduce the number of unnecessary compilations. See this page
for tips on how to do this.
http://www.sql-server-performance.com/tips/stored_procedures.asp
If you find that your server is performing over 100 compilations per second,
you should take the time to investigate if the cause of this is something
that you can control. Too many compilations will hurt your SQL Server's
performance. [7.0, 2000] Updated 9-4-2006
Geoff Chovaz
MCTS: SQL Server 2005
MCITP: Database Administrator
MCITP: Database Developer

Correct link:
http://www.sql-server-performance.com/tips/stored_procedures_p1.aspx
Geoff Chovaz
MCTS: SQL Server 2005
MCITP: Database Administrator
MCITP: Database Developer

Also, did you run a trace and see where the compilations are coming from?
What queries are being executed and by whom?
--
Geoff Chovaz
MCTS: SQL Server 2005
MCITP: Database Administrator
MCITP: Database Developer

See in-line.
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Cory Harrison" <charrison@.csiweb.com> wrote in message
news:OrU0Mv6$HHA.1164@.TK2MSFTNGP02.phx.gbl...
> Does anybody have a good list or know of a good website that shows various
> things you can do to reduce the compilations per second? I'm running SQL
> Server 2000 and I'm getting anywhere from 114 to 200 compilations per
> second, and obviously my CPU is pegged when it hits that higher end. What
> I do know that is supposed to help:
> 1. upgrade to 2005
This can help, as 2005 can compile at the statement level, and if you have
lots of ad-hoc code (which it sounds like you do) you can use the FORCED
PARAMETERIZATION option.
> 2. increase physical memory
This may or may not help. If the code is simply not reusable this may
make matters worse.
> 3. increase minimum server memory setting
This will not help unless you have other apps on the same server. In
that case think about removing them.
> 4. use a single database instead of several identical ones.
Not a good way to go to fix compilation issues. Optimize the code first.
> 5. possibly change the code to use the full database path including the
> owner for table names
That is not the full path. FULL would be DB.Owner.Object. But you are
on the right track. ALWAYS owner qualify your objects.
> Anything else?
Yes, optimize the code so the queries can be reused. Have a look here:
http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx
>
> thanks,
> Cory

1) Are you running stored procs or ADO/ADONET code? Some quick tips for
sproc compilations:
a) reduce temp table usage (table variables maybe, or combine statements to
eliminate temp storage needs altogether)
b) create all temp objects at top of sproc
c) remove 'bad' SET statements from sprocs. see BOL/MS site for these.
Reducing compilations from ADO/ADONET code usually requires rewriting the
code to use more set based logic. As Andy Kelly said, on 2005 you can use
the FORCED PARAMETERIZATION database setting. I have a client that saw a
30-40% throughput improvement on their bad ADO code using that setting.
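Tips (a) and (b) might look like this in practice (procedure and table names are invented; a table variable stands in for a #temp table):

```sql
-- Declaring the table variable up front avoids the schema changes that
-- force recompiles when temp objects are created mid-procedure.
CREATE PROCEDURE dbo.DemoReport
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @work TABLE (id int PRIMARY KEY);  -- (a) table variable, (b) at the top
    INSERT INTO @work (id) SELECT 1;
    SELECT id FROM @work;
END
```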

We are running nearly 100% ColdFusion code. We aren't using temp tables
even in the few stored procs that we have. We are currently rewriting the
most popular apps on our sites to use parameters on everything. We did look
into the cache on the server using some of the system stored procedures and
we realized that the ones that were parameterized are getting used over 200
times, while the ad-hoc queries are compiled separately with every hit, and
only have one hit on them.
Also, I posted this elsewhere:
One thing that we are still very confused about is how the initial lookup
actually works. We're mainly wondering if two similar queries that work
identically can share the same execution plan in the cache. For example, if
you ran this:
<cfquery>
Select field from table where ID = <CFQUERYPARAM value="1">
</cfquery>
Then on a different page that you wrote before you knew all about code reuse
in CFC's and stuff, you ran this:
<cfquery>
SELECT field
FROM table
WHERE id = <CFQUERYPARAM value="1">
</cfquery>
Same thing, but formatted differently. Will they both have to be compiled
into two different execution plans, or is SQL Server smart enough to use the
same plan in the cache? Is it a simple text lookup, or are formatting
characters stripped and it reduced down to bytecode or something? I have no
idea.
Thanks,
Cory

If you don't parameterize your queries (which can be identified using a profiler trace), then SQL
Server does the lookup using a checksum of the whole query string; i.e., it is
"everything"-sensitive.
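Because the lookup is checksum-based, the two cfquery examples earlier, which differ only in whitespace, get separate plans. A sketch of an explicitly parameterized call (object names taken from the example; `sp_executesql` is the standard system procedure):

```sql
-- Both executions reuse one cached plan: the query text is identical and
-- only the parameter value changes.
EXEC sp_executesql
    N'SELECT field FROM dbo.[table] WHERE ID = @id',
    N'@id int',
    @id = 1;
```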
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi

Have a look at these to see how this works and how to address it.
http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx
http://support.microsoft.com/kb/243588
Andrew J. Kelly SQL MVP
Solid Quality Mentors

Reducing column size

I want to reduce the length of a column in one of my database tables
from varchar(2048) to varchar(900), basically so I can create an index
on the column to gain much improved performance.
To do this, I am executing the following statement:
ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
Executing this in query analyzer results in the following error:
Server: Msg 8152, Level 16, State 9, Line 1
String or binary data would be truncated.
The statement has been terminated.
If I make this change via Enterprise Manager, it appears as a warning
and I can click OK to ignore and make the change. However, in query
analyzer this is treated as an error and the statement is not
performed.
What I want is to basically ignore this error and make the change
regardless. Is this possible?
It basically tells you that the data in that column is longer than varchar(900).
You can only reduce a column's length to no less than the longest value
currently stored in it.
Review the data in that column that exceeds 900 characters.
Cheers
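Following the advice above, a sketch (using the table and column from the question) to count and, if the loss is acceptable, truncate the offending rows before the ALTER:

```sql
-- How many rows would be truncated by the change?
SELECT COUNT(*) FROM myTable WHERE DATALENGTH(description) > 900;
-- Destructive: trims the tails. Take a backup first.
UPDATE myTable SET description = LEFT(description, 900)
WHERE DATALENGTH(description) > 900;
-- Now ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL succeeds.
```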

Notice that you have data that is longer than 900 bytes, and by forcing
the modification you will lose data.
The reason that EM does it while the ALTER statement in QA
doesn't work is that EM doesn't really run an ALTER TABLE statement.
Instead, EM creates a new table and then runs an INSERT...SELECT from the
source table into the new table. Then it drops the old table, renames
the new table, and creates the table constraints and the references
from other tables to the new table.
Also notice that even if you have an index on a column that holds 900
bytes, the index won't be very effective.
Adi

Reducing column size

I want to reduce the length of a column in one of my database tables
from varchar(2048) to varchar(900), basically so I can create an index
on the column to gain much improved performance.
To do this, I am executing the following statement:
ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
Executing this in query analyzer results in the following error:
Server: Msg 8152, Level 16, State 9, Line 1
String or binary data would be truncated.
The statement has been terminated.
If I make this change via Enterprise Manager, it appears as a warning
and I can click OK to ignore and make the change. However, in query
analyzer this is treated as an error and the statement is not
performed.
What i want is to basically ignore this error and make the change
regardless. Is this possible?It basically tell you the data length in that column is more then varchar(90
0).
You can reduce the length of a column of the length less the length of the
data in that column.
review the data in that column which more the 900 char
Cheers
"jasonatkins2001@.hotmail.com" wrote:

> I want to reduce the length of a column in one of my database tables
> from varchar(2048) to varchar(900), basically so I can create an index
> on the column to gain much improved performance.
> To do this, I am executing the following statement:
> ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
> Executing this in query analyzer results in the following error:
> Server: Msg 8152, Level 16, State 9, Line 1
> String or binary data would be truncated.
> The statement has been terminated.
> If I make this change via Enterprise Manager, it appears as a warning
> and I can click OK to ignore and make the change. However, in query
> analyzer this is treated as an error and the statement is not
> performed.
> What i want is to basically ignore this error and make the change
> regardless. Is this possible?

Notice that you have data that is longer than 900 bytes, and by forcing
the modification you will lose data.
The reason the change works in Enterprise Manager but the ALTER statement in
Query Analyzer doesn't is that EM doesn't really run an ALTER TABLE statement.
Instead, EM creates a new table and runs an INSERT ... SELECT from the
source table into the new table. Then it drops the old table, renames
the new table, and recreates the constraints and the references
from other tables to the new table.
Also notice that even if you do create an index on a column that holds 900
bytes, the index won't be very effective.
Adi
jasonatkins2001@.hotmail.com wrote:
> I want to reduce the length of a column in one of my database tables
> from varchar(2048) to varchar(900), basically so I can create an index
> on the column to gain much improved performance.
> To do this, I am executing the following statement:
> ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL
> Executing this in query analyzer results in the following error:
> Server: Msg 8152, Level 16, State 9, Line 1
> String or binary data would be truncated.
> The statement has been terminated.
> If I make this change via Enterprise Manager, it appears as a warning
> and I can click OK to ignore and make the change. However, in query
> analyzer this is treated as an error and the statement is not
> performed.
> What i want is to basically ignore this error and make the change
> regardless. Is this possible?
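If you decide the truncation is acceptable, a sketch of how to do it deliberately in Query Analyzer rather than letting EM truncate silently (table and column names are taken from the thread; verify the row count first):

```sql
-- Count the rows that would be truncated by the ALTER.
SELECT COUNT(*) AS too_long
FROM myTable
WHERE DATALENGTH(description) > 900;

-- If losing the tail of those values is acceptable, truncate them
-- explicitly first; the ALTER then succeeds without error 8152.
UPDATE myTable
SET description = LEFT(description, 900)
WHERE DATALENGTH(description) > 900;

ALTER TABLE myTable ALTER COLUMN description VARCHAR(900) NULL;
```

This makes the data loss an explicit, reviewable step instead of a side effect of an EM table rebuild.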

Reducing the file size of datafiles

Hi,
in the enterprise manager I want to reduce the datafiles of a database. When
I do this the enterprise manager says
that there is 80% (more than 17GB) that can be freed, but after the cleaning
process the datafiles are no
smaller than before.
How can I release the freeable space in the datafiles ?
Is there another way that works better ?
Best regards
Axel Lanser

Axel,
Look at DBCC SHRINKDATABASE or DBCC SHRINKFILE in the BOL.
"Axel Lanser" <anonymous@.discussions.microsoft.com> wrote in message
news:EDD50E53-D546-4F24-90EB-674C0172361C@.microsoft.com...
> Hi,
> in the enterprise manger I want to reduce the datafiles of a database.
When I do this the enterprise manager says,
> that there are 80% (more then 17GB), that can be freed, but after the
cleaning process the datafiles would not be
> shorter then before.
> How can I clean the freeable memory of the datafiles ?
> Is there an other way that works better ?
> Best regards
> Axel Lanser

Hi,
Did you mean "How to reduce the datafile size"? If yes, then execute the
below command in query analyzer:
DBCC SHRINKFILE('logicalfilename', TRUNCATEONLY)
If it is a TX log file, perform a 'Backup LOG' before shrinking the file.
Refer the below link for more info.
http://www.sql-server-performance.c...p?TOPIC_ID=3140
Thanks
Hari
"Axel Lanser" <anonymous@.discussions.microsoft.com> wrote in message
news:EDD50E53-D546-4F24-90EB-674C0172361C@.microsoft.com...
> Hi,
> in the enterprise manger I want to reduce the datafiles of a database.
When I do this the enterprise manager says,
> that there are 80% (more then 17GB), that can be freed, but after the
cleaning process the datafiles would not be
> shorter then before.
> How can I clean the freeable memory of the datafiles ?
> Is there an other way that works better ?
> Best regards
> Axel Lanser|||Hi,
I have made a 'backup log' before I start the dbcc shrinkfile for the Log-Fi
le, but the Log-File wouldn't be reduce.
I think all active transaction must be proccessed, because the data base is
in single user mode and I was permanently
conected with this database and without any new transaction on this database
for two days.
Therfore I don't know why the Logfile is not reduceable.
Can you help me ?
What can I do ?
Thanks
Axel|||http://www.karaszi.com/sqlserver/info_dont_shrink.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
"Axel Lanser" <anonymous@.discussions.microsoft.com> wrote in message
news:B15B5790-C11F-4D51-8D0E-64FA48EB0B48@.microsoft.com...
> Hi,
> I have made a 'backup log' before I start the dbcc shrinkfile for the
Log-File, but the Log-File wouldn't be reduce.
> I think all active transaction must be proccessed, because the data base
is in single user mode and I was permanently
> conected with this database and without any new transaction on this
database for two days.
> Therfore I don't know why the Logfile is not reduceable.
> Can you help me ?
> What can I do ?
> Thanks
> Axel
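For reference, a minimal sequence that usually frees unused transaction log space on SQL 2000 (database name, logical log file name, and backup path are placeholders; check sysfiles for the real names):

```sql
-- See how full each transaction log really is before shrinking.
DBCC SQLPERF(LOGSPACE);

-- Back up (and thereby truncate the inactive part of) the log,
-- then shrink the physical file toward a target size in MB.
BACKUP LOG MyDatabase TO DISK = 'C:\Backup\MyDatabase_log.bak';
USE MyDatabase;
DBCC SHRINKFILE('MyDatabase_log', 100);
```

If the file still doesn't shrink, the active portion of the log may sit at the end of the file; running a little write activity and then another BACKUP LOG before retrying the shrink often cycles the active portion toward the front on SQL 2000.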

Reduce view state size

We are having problems with SQL 2000 Reporting Services with large view states
being passed to the browser (up to 20MB). Is there any way of reducing the
view state size used by Reporting Services, or any way of storing view state
in a database rather than having it passed to the browser with every request?
Thanks,
SJH

Hi SJH,
I would like to know: does this issue appear on a specific report? If so,
what does this report contain? Does it contain any large amount of text?
It's strange that your Reporting Services is using so much ViewState. How did
you figure out this behavior? Have you got any error message?
Please let me know the information so that I can provide further assistance.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
Get notification to my posts through email? Please refer to
http://msdn.microsoft.com/subscriptions/managednewsgroups/default.aspx#notifications.
Note: The MSDN Managed Newsgroup support offering is for non-urgent issues
where an initial response from the community or a Microsoft Support
Engineer within 1 business day is acceptable. Please note that each follow
up response may take approximately 2 business days as the support
professional working with you may need further investigation to reach the
most efficient resolution. The offering is not appropriate for situations
that require urgent, real-time or phone-based interactions or complex
project analysis and dump analysis issues. Issues of this nature are best
handled working with a dedicated Microsoft Support Engineer by contacting
Microsoft Customer Support Services (CSS) at
http://msdn.microsoft.com/subscriptions/support/default.aspx.
==================================================
(This posting is provided "AS IS", with no warranties, and confers no
rights.)

I've tried to check on the exact circumstances for this problem, but can't
speak to the individual who reported the problem today.
The problem seems to occur when creating a new subscription for a report.
It does not happen on all reports. I'm not yet sure if it is isolated to a
single report.
S
"Wei Lu [MSFT]" wrote:
> Hi SJH,
> I would like to know does this issue appeared on a specified report? If so,
> what does this report contained? Does it contain any large amount of text?
> It's weird that your reporting services using so much ViewState. How did
> you figure out this behavior? Have you got any error message?
> Please let me know the information so that I can provide further assistance.
> Sincerely,
> Wei Lu
> Microsoft Online Community Support
>

Hi SJH,
Is the subscription a Data driven subscription or a regular subscription?
Have you got any error message when you create the subscription? What's the
report when you try to create the subscription?
Please let me know the result so that I can provide further assistance.
Sincerely,
Wei Lu
Microsoft Online Community Support

Hi SJH,
How is everything going? Please let me know if you have any questions or
concerns. Thank you!
Sincerely,
Wei Lu
Microsoft Online Community Support

reduce trans log size; help needed

Hello,

I am hoping you can help me with the following problem; I need to process
the following steps every couple of hours in order to keep our Sql 2000
database as small as possible (the transaction log is 5x bigger than the db).

1.back-up the entire database
2.truncate the log
3.shrink the log
4.back-up once again.

As you may have determined, I am relatively new to managing a sql server
database and while I have found multiple articles online about the topics I
need to accomplish, I cannot find any actual examples that explain where I
input the code used to accomplish the above-mentioned steps. I do
understand the theory behind the steps I just do not know how to accomplish
them!

If you know of a well-documented tutorial (Aside from Books Online (F1)),
please point me in the right direction.

Regards.

First, create a backup device with whatever name you want. (I assume you
know how to do this).
Might have to check the syntax on each of these but here goes

1. BACKUP DATABASE [your_database_name] TO [your_backup_device] WITH STATS, INIT
2. BACKUP LOG [your_database_name] TO [your_backup_device]
3. DBCC SHRINKFILE(2, TRUNCATEONLY)
4. BACKUP DATABASE [your_database_name] TO [your_backup_device]

Step 3 above assumes that file 2 is the logfile for your database. You
might have to check this by issuing a select * from sysfiles (in query
analyzer) to make sure that file 2 is the log file.

Hope this helps.

Oscar...

> 1.back-up the entire database
> 2.truncate the log
> 3.shrink the log
> 4.back-up once again.
> As you may have determined, I am relatively new to managing a sql server
> database and while I have found multiple articles online about the topics
I
> need to accomplish, I cannot find any actual examples that explain where I
> input the coded used to accomplish the above-mentioned steps. I do
> understand the theory behind the steps I just do not know how to
accomplish
> them!
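Putting Oscar's steps into a single script you can paste into Query Analyzer (database and device names are placeholders; run SELECT fileid, name FROM sysfiles in your database to confirm which file id is the log):

```sql
-- 1. Full database backup.
BACKUP DATABASE [MyDb] TO [MyBackupDevice] WITH STATS, INIT;

-- 2. Back up (and thereby truncate the inactive part of) the log.
BACKUP LOG [MyDb] TO [MyBackupDevice];

-- 3. Shrink the physical log file (file id 2 is typically the log;
--    verify with: SELECT fileid, name FROM sysfiles).
USE [MyDb];
DBCC SHRINKFILE(2, TRUNCATEONLY);

-- 4. Final full backup.
BACKUP DATABASE [MyDb] TO [MyBackupDevice];
```

Each step can also become one step of a scheduled job so the sequence runs every couple of hours without manual intervention.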

reduce trans log size; help needed

Hello,
I am hoping you can help me with the following problem; I need to process
the following steps every couple of hours in order to keep our Sql 2000
database as small as possible (the transaction log is 5x bigger than the db).
1.back-up the entire database
2.truncate the log
3.shrink the log
4.back-up once again.
As you may have determined, I am relatively new to managing a sql server
database and while I have found multiple articles online about the topics I
need to accomplish, I cannot find any actual examples that explain where I
input the code used to accomplish the above-mentioned steps. I do
understand the theory behind the steps I just do not know how to accomplish
them!
If you know of a well-documented tutorial (Aside from Books Online (F1)),
please point me in the right direction.
Regards.

| Hello,
|
| I am hoping you can help me with the following problem; I need to process
| the following steps every couple of hours in order to keep our Sql 2000
| database a small as possible (the transaction log is 5x bigger than the
db).
|
| 1.back-up the entire database
| 2.truncate the log
| 3.shrink the log
| 4.back-up once again.
|
| As you may have determined, I am relatively new to managing a sql server
| database and while I have found multiple articles online about the topics
I
| need to accomplish, I cannot find any actual examples that explain where I
| input the coded used to accomplish the above-mentioned steps. I do
| understand the theory behind the steps I just do not know how to
accomplish
| them!
--
You can create a job that executes the above steps every couple of hours.
You can define each of the above steps as actual job steps with the job.
You can use Enterprise Manager to create this job.
Hope this helps,
--
Eric Cárdenas
SQL Server support
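If you prefer to script the job instead of clicking through Enterprise Manager, a sketch using the msdb job procedures (the job name, database name, and backup device are placeholders, and the schedule values assume an every-2-hours daily run):

```sql
USE msdb;

EXEC sp_add_job @job_name = 'Shrink MyDb log';

EXEC sp_add_jobstep
    @job_name = 'Shrink MyDb log',
    @step_name = 'backup and shrink',
    @subsystem = 'TSQL',
    @database_name = 'MyDb',
    @command = 'BACKUP DATABASE MyDb TO MyBackupDevice WITH INIT;
BACKUP LOG MyDb TO MyBackupDevice;
DBCC SHRINKFILE(2, TRUNCATEONLY);
BACKUP DATABASE MyDb TO MyBackupDevice;';

EXEC sp_add_jobschedule
    @job_name = 'Shrink MyDb log',
    @name = 'every 2 hours',
    @freq_type = 4,             -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,      -- repeat in units of hours
    @freq_subday_interval = 2;

-- The job must be targeted at a server before it will run.
EXEC sp_add_jobserver @job_name = 'Shrink MyDb log', @server_name = '(local)';
```

The job then appears under Management > SQL Server Agent > Jobs in Enterprise Manager, where it can be edited like any hand-created job.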

Reduce query time in ApplicationName ".Net SqlClient Data Provider"

I use Reporting Services with SQL Server.

I have a report with a table, and with a big query that takes 4 parameters.

If users choose some parameters the query is fast, but sometimes it is very slow...

In SQL Server Profiler, I can see that it really is the query that runs long.

With the ApplicationName ".Net SqlClient Data Provider" the query runs for 5min30s.

If I execute the same query with the parameters replaced by literals in SQL Server Management, it is fine.

(With SQL Profiler: ApplicationName "Microsoft SQL Server Management", duration = 2s.)

Can someone help me? Is there anything to increase the memory cache or something like that? ...

Let's take a look at your query.

Reduce time connection to AS2005

what is "slow"?
when the pivottable access the cube, the first time a query is executed
against the server to retrieve the measure names, but this query also ask
for the data, not only the labels. (use the profiler to see this query)
if 1 of your measures is a complex one, then you'll suffer a delay.
if your metadata is more complex in your cubes, then the time required to
retrieve the metadata is longer.
verify if you have the right SP level installed on the server AND the client
verify if you have activated the compression
also, it could be an authentication issue; maybe you suffer a delay due to a
network configuration issue (verify the DNS configuration if you use
Active Directory)
again, use the profiler on the server to identify the source of the problem.
"Kazmane" <mkazmane@.capinfo.co.ma> wrote in message
news:%23y9YKXulGHA.1972@.TK2MSFTNGP05.phx.gbl...
> can someone help to reduce a time connecting to As2005 with using OLEDB9.0
> and PivotTable.
> thanks.

reduce the table height when no rows in table having a footer control

hi,
I am using two headers and three grouping levels in a table, and there is
one Report Footer in the report.
When there is no data present, the report hides all groups and
detail rows, but the Report Footer still remains at the same position. I
want the Footer to also be aligned to the top when there is no data;
the space between the Header and Footer should be zero in this case.
Which property do I need to set to do this, or what should I do?
thx.

Rajesh,
How far down is the footer?
I see there are 2 solutions in this case:
1) In the properties of a cell there is CanGrow and CanShrink
properties.(This is under Layout). Dont forget, your going to change
this for the DETAIL and grouping rows... not the footer one.
2) Also you can set all the middle rows to become Hidden. This can be
be done using an expressions that should basically check if data is
present ( check on some other row), and then the Visibiliy --> hidden
property should be set to true for all those detail/grouping rows.
I hope this helps.
regards,
Stas K.

hi Sorcerdon,
I tried your solution but it's not working; my problem is still not
resolved.
my report format is like
TableHeader
TableHeader
Grouping I Fund
Grouping II Issuer
Grouping III Security
<Display Of Information>
Security/Facility Wise Total
Issuer Wise Total
Fund Wise Total
Grand Total
Page Footer
when there is no data then i get the structure like
TableHeader
TableHeader
Grand Total
<space>
<space>
<space>
<space>
<space>
<space>
<space>
Page Footer
I want the structure like this
TableHeader
TableHeader
Grand Total
Page Footer
I hope now you have understood my problem completely. Please try to resolve
this.
Thx
Rajesh
Sorcerdon wrote:
> Rajesh,
> How far down is the footer?
> I see there are 2 solutions in this case:
> 1) In the properties of a cell there is CanGrow and CanShrink
> properties.(This is under Layout). Dont forget, your going to change
> this for the DETAIL and grouping rows... not the footer one.
> 2) Also you can set all the middle rows to become Hidden. This can be
> be done using an expressions that should basically check if data is
> present ( check on some other row), and then the Visibiliy --> hidden
> property should be set to true for all those detail/grouping rows.
> I hope this helps.
> regards,
> Stas K.

Rajesh,
It looks to me like you want the Page Footer to actually be a Table
Footer. Can you create an additional Table Footer row and insert the
values you have in your Page Footer?
The only problem with this is the footer will always appear immediately
below your Grand Total row instead of at the bottom of the page.
To offset this scenario, you could set the visibility properties for
the Table Footer to hidden and the Page Footer to Visible when there is
data and vice-a-versa when there is no data.
Good luck
Rajesh.jain25@.gmail.com wrote:
> hi Scoredon,
> I tried ur solution but its not working ,still my probelm is not
> resolved.
> my report format is like
> TableHeader
> TableHeader
> Grouping I Fund
> Grouping II Issuer
> Grouping III Security
> <Display Of Information>
> Security/Facility Wise Total
> Issuer Wise Total
> Fund Wise Total
> Grand Total
> Page Footer
> when there is no data then i get the structure like
> TableHeader
> TableHeader
> Grand Total
> <space>
> <space>
> <space>
> <space>
> <space>
> <space>
> <space>
> Page Footer
> I want the structure like this
> TableHeader
> TableHeader
> Grand Total
> Page Footer
> I hope now you have understood my problem completely.plz try to resolve
> this.
> Thx
> Rajesh
> Sorcerdon wrote:
> > Rajesh,
> >
> > How far down is the footer?
> >
> > I see there are 2 solutions in this case:
> > 1) In the properties of a cell there is CanGrow and CanShrink
> > properties.(This is under Layout). Dont forget, your going to change
> > this for the DETAIL and grouping rows... not the footer one.
> > 2) Also you can set all the middle rows to become Hidden. This can be
> > be done using an expressions that should basically check if data is
> > present ( check on some other row), and then the Visibiliy --> hidden
> > property should be set to true for all those detail/grouping rows.
> >
> > I hope this helps.
> > regards,
> >
> > Stas K.

Reduce the 'Space Allocated' for Transaction Logs in MS SQL 2K

Hey Guys --
We have a database that gets lots of activity, but it's not mission
critical data, so no need for a backup. Problem is the Transaction Log
was set to Automatically Grow, and over a year or so it grew to over 50
gigs and filled-up the HD.
We did a backup and the transaction log is down now, but the Space
Allocated is still sitting at 50 Gigs and locking that space for
transaction logs not leaving any room for anything else. How can I
change this? I went into the DB properties and tried to change the
Space Allocated to like 100 (Megs) but I get the error: The new DBFile
size must be larger than the current size. How can I reduce this
filesize?
Thanks for any insight or ideas.
Sam
You can shrink the file using DBCC SHRINKFILE. See the Books Online for
details.
Also, assuming SQL 2000, change the database recovery model to SIMPLE if you
don't need the transaction log for recovery.

> We have a database that gets lots of activity, but it's not mission
> critical data, so no need for a backup.
Do you mean transaction log backup? If you perform no full database
backups, your only option in the event of database loss is to recreate the
database from script.
Hope this helps.
Dan Guzman
SQL Server MVP
"Alex" <samalex@.gmail.com> wrote in message
news:1134491880.285851.217750@.g44g2000cwa.googlegroups.com...
> Hey Guys --
> We have a database that gets lots of activity, but it's not mission
> critical data, so no need for a backup. Problem is the Transaction Log
> was set to Automatically Grow, and over a year or so it grew to over 50
> gigs and filled-up the HD.
> We did a backup and the transaction log is down now, but the Space
> Allocated is still sitting at 50 Gigs and locking that space for
> transaction logs not leaving any room for anything else. How can I
> change this? I went into the DB properties and tried to change the
> Space Allocated to like 100 (Megs) but I get the error: The new DBFile
> size must be larger than the current size. How can I reduce this
> filesize?
> Thanks for any insight or ideas.
> Sam
>
Shrink that pig !
Backup T-Log
DBCC SHRINKFILE(FIleName,0)
Change Growth Settings...
"Alex" <samalex@.gmail.com> wrote in message
news:1134491880.285851.217750@.g44g2000cwa.googlegroups.com...
> Hey Guys --
> We have a database that gets lots of activity, but it's not mission
> critical data, so no need for a backup. Problem is the Transaction Log
> was set to Automatically Grow, and over a year or so it grew to over 50
> gigs and filled-up the HD.
> We did a backup and the transaction log is down now, but the Space
> Allocated is still sitting at 50 Gigs and locking that space for
> transaction logs not leaving any room for anything else. How can I
> change this? I went into the DB properties and tried to change the
> Space Allocated to like 100 (Megs) but I get the error: The new DBFile
> size must be larger than the current size. How can I reduce this
> filesize?
> Thanks for any insight or ideas.
> Sam
>
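A sketch of the full sequence Dan describes for SQL 2000, assuming the log really isn't needed for point-in-time recovery (database and logical file names are placeholders):

```sql
-- Stop logging for recovery purposes: with SIMPLE recovery the
-- inactive log is truncated automatically at checkpoints.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- Find the logical name of the log file to pass to SHRINKFILE.
USE MyDb;
SELECT fileid, name, size FROM sysfiles;

-- Release the unused space back to the operating system
-- (second argument is the target size in MB).
DBCC SHRINKFILE('MyDb_log', 100);
```

After the shrink, it is worth capping the file growth settings so the log cannot silently grow back to 50GB.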
