
Monday, March 26, 2012

Refreshing data in the cursor.

Hello everybody,

I wrote a stored procedure for SQL Server 2000 that I am using for paging. The procedure is as follows:

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER PROCEDURE [dbo].[OLAP_PagedRows]
(
    @SelectFields nvarchar(2000) = NULL,
    @GroupByFields nvarchar(1000) = NULL,
    @BaseTable varchar(100),
    @KeyColumn nvarchar(200) = NULL,
    @JoinTables varchar(500) = NULL,
    @ConditionalClause varchar(1000) = NULL,
    @Pagesize int = 10,
    @PageNumber int = 1,
    @SortExpression nvarchar(200) = NULL,
    @SearchText nvarchar(200) = NULL
)
AS
BEGIN
    DECLARE @SQLSTMT nvarchar(4000)
    DECLARE @SQLSTMT1 nvarchar(4000)
    SET @SQLSTMT1 = ''

    -- If the page size is NULL (or zero), fall back to the default
    IF @Pagesize IS NULL OR @Pagesize = 0
    BEGIN
        SET @Pagesize = 10
    END

    -- If the page number is NULL (or zero), fall back to the default
    IF @PageNumber IS NULL OR @PageNumber = 0
    BEGIN
        SET @PageNumber = 1
    END

    -- Start constructing the query
    SET @SQLSTMT = 'SELECT '
    SET @SQLSTMT1 = 'DECLARE @CountValue INT SELECT @CountValue = count(*) FROM ' + @BaseTable
    SET @SQLSTMT = @SQLSTMT + @SelectFields + ' FROM ' + @BaseTable

    IF @JoinTables IS NOT NULL
    BEGIN
        SET @SQLSTMT = @SQLSTMT + ' ' + @JoinTables
        SET @SQLSTMT1 = @SQLSTMT1 + ' ' + @JoinTables
    END

    DECLARE @StmtWhereClause nvarchar(500)
    SET @StmtWhereClause = ''

    -- Build the WHERE clause
    IF @SearchText IS NOT NULL AND RTRIM(LTRIM(@SearchText)) <> ''
    BEGIN
        SET @StmtWhereClause = @StmtWhereClause + ' WHERE ' + @SearchText
    END

    IF @ConditionalClause IS NOT NULL AND RTRIM(LTRIM(@ConditionalClause)) <> ''
    BEGIN
        IF @StmtWhereClause <> ''
            SET @StmtWhereClause = @StmtWhereClause + ' AND ' + @ConditionalClause
        ELSE
            SET @StmtWhereClause = @StmtWhereClause + ' WHERE ' + @ConditionalClause
    END

    SET @SQLSTMT = @SQLSTMT + @StmtWhereClause
    SET @SQLSTMT1 = @SQLSTMT1 + @StmtWhereClause

    IF @GroupByFields IS NOT NULL AND RTRIM(LTRIM(@GroupByFields)) <> ''
    BEGIN
        SET @SQLSTMT = @SQLSTMT + ' GROUP BY ' + @GroupByFields
        SET @SQLSTMT1 = @SQLSTMT1 + ' GROUP BY ' + @GroupByFields
    END

    IF @SortExpression IS NOT NULL AND RTRIM(LTRIM(@SortExpression)) <> ''
    BEGIN
        SET @SortExpression = LTRIM(RTRIM(' ORDER BY ' + @SortExpression))
        SET @SQLSTMT = @SQLSTMT + ' ' + @SortExpression
        SET @SQLSTMT1 = @SQLSTMT1 + ' ' + @SortExpression
    END

    SET @SQLSTMT1 = @SQLSTMT1 + ' SELECT @CountValue AS MyRows '

    --SELECT @SQLSTMT1
    --SELECT @SQLSTMT

    DECLARE @StartRow INT

    -- Cursors declared inside dynamic SQL are global by default, so
    -- temp_Cursor is still visible after sp_executesql returns
    SET @SQLSTMT = ' DECLARE temp_Cursor CURSOR SCROLL FOR ' + @SQLSTMT
    EXECUTE sp_executesql @SQLSTMT

    OPEN temp_Cursor

    DECLARE @RowCount INT
    SET @RowCount = 1
    SET @StartRow = (@Pagesize * (@PageNumber - 1)) + @RowCount

    -- Fetch one page of rows, one FETCH (and one result set) per row
    WHILE @RowCount <= @Pagesize
    BEGIN
        FETCH ABSOLUTE @StartRow FROM temp_Cursor
        SET @RowCount = @RowCount + 1
        SET @StartRow = @StartRow + 1
    END

    CLOSE temp_Cursor
    DEALLOCATE temp_Cursor

    EXECUTE sp_executesql @SQLSTMT1
END

It works, but I have a problem with this kind of paging: I have to load the whole result set into the cursor just to fetch one page. My tables contain more than half a million records, so opening this cursor on every call puts a very heavy load on the server.

This is probably not the best solution, but SQL Server 2000 cannot offer much more. If I use a subquery for this, such as TOP <Number>, it adversely affects the nature of the data retrieval.

One solution I am considering is to load the cursor once and have any updates performed on the underlying tables automatically reflected in it.

Is this possible? Please help me.
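For reference, one cursor-free alternative that does work on SQL Server 2000 is "keyset" (seek-based) paging: the caller remembers the key of the last row on the previous page and seeks past it. This is only a sketch under assumptions; `BaseTable`, `KeyCol`, and `SomeColumn` are illustrative names, and it assumes an index on the key column:

```sql
-- Seek-based paging sketch: assumes BaseTable has an indexed KeyCol and the
-- caller passes in the last key value it saw on the previous page.
DECLARE @LastKey int, @PageSize int
SET @LastKey = 1000          -- illustrative: last key of the previous page
SET @PageSize = 10

SET ROWCOUNT @PageSize       -- SQL Server 2000 accepts a variable here; TOP does not
SELECT KeyCol, SomeColumn    -- column names are placeholders
FROM BaseTable
WHERE KeyCol > @LastKey
ORDER BY KeyCol
SET ROWCOUNT 0               -- always reset, or later statements are capped too
```

The index lets the server seek straight to the page instead of materializing all half a million rows, at the cost of only supporting next/previous-style navigation rather than jumping to an arbitrary page number.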

Regards

Andy Rogers

Hi Andy Rogers,
Why use a cursor over that much data? Please try not to use a cursor at all.
You can use a temp table for sorting the data, and if your data doesn't change, you can work from a static copy.
Ordering half a million records is a heavy load on SQL Server 2000.
Sort your data once and then page through it with SELECT TOP X for the best performance.
Good luck
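The temp-table suggestion above can be sketched like this on SQL Server 2000. Table and column names are illustrative, and note one caveat: `SELECT ... INTO` with `IDENTITY` does not strictly guarantee that the numbering follows the `ORDER BY`, so treat this as a sketch rather than a guaranteed implementation:

```sql
-- Page 3, page size 10: number the first 30 sorted rows into a temp table,
-- then return only rows 21-30.
DECLARE @PageSize int, @PageNumber int, @Cap int, @StartRow int
SET @PageSize = 10
SET @PageNumber = 3
SET @Cap = @PageSize * @PageNumber            -- 30: last row of the page
SET @StartRow = @Cap - @PageSize + 1          -- 21: first row of the page

SET ROWCOUNT @Cap                             -- a variable is allowed with SET ROWCOUNT
SELECT RowNum = IDENTITY(int, 1, 1), KeyCol   -- KeyCol is a placeholder
INTO #page
FROM BaseTable
ORDER BY KeyCol
SET ROWCOUNT 0

SELECT KeyCol FROM #page WHERE RowNum >= @StartRow
DROP TABLE #page
```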

|||Because there is no "BOTTOM" keyword in SELECT, this is tricky to do. This is how I have done it in the past.

I modified your code a little. The way you were doing it returns one single-row result set for every fetched record; this method returns the whole page in one result set.

Basically it does a SELECT TOP of the first @StartRow + @PageSize - 1 rows, so you get the smallest possible set from top to bottom, into a temp table, then deletes everything before @StartRow and returns the rest. This is the best method I have found.

DECLARE @SQLSTMT nvarchar(4000)
DECLARE @SQLSTMT1 nvarchar(4000)

SET @SQLSTMT1 = ''

-- If the page size is NULL (or zero), fall back to the default
IF @Pagesize IS NULL OR @Pagesize = 0
BEGIN
    SET @Pagesize = 10
END

-- If the page number is NULL (or zero), fall back to the default
IF @PageNumber IS NULL OR @PageNumber = 0
BEGIN
    SET @PageNumber = 1
END

DECLARE @StartRow INT
SET @StartRow = (@Pagesize * (@PageNumber - 1)) + 1

-- Start constructing the query
SET @SQLSTMT1 = 'DECLARE @CountValue INT SELECT @CountValue = count(*) FROM ' + @BaseTable

-- TOP (StartRow + PageSize - 1) pulls exactly the rows up to the end of the
-- requested page (without the -1 you get one row too many)
SET @SQLSTMT = 'SELECT TOP ' + CAST(@StartRow + @Pagesize - 1 AS varchar(10)) + ' IntRowNum = IDENTITY(int,1,1), '

SET @SQLSTMT = @SQLSTMT + @SelectFields + ' INTO #temptable FROM ' + @BaseTable

IF @JoinTables IS NOT NULL
BEGIN
    SET @SQLSTMT = @SQLSTMT + ' ' + @JoinTables
    SET @SQLSTMT1 = @SQLSTMT1 + ' ' + @JoinTables
END

DECLARE @StmtWhereClause nvarchar(500)
SET @StmtWhereClause = ''

-- Build the WHERE clause
IF @SearchText IS NOT NULL AND RTRIM(LTRIM(@SearchText)) <> ''
BEGIN
    SET @StmtWhereClause = @StmtWhereClause + ' WHERE ' + @SearchText
END

IF @ConditionalClause IS NOT NULL AND RTRIM(LTRIM(@ConditionalClause)) <> ''
BEGIN
    IF @StmtWhereClause <> ''
        SET @StmtWhereClause = @StmtWhereClause + ' AND ' + @ConditionalClause
    ELSE
        SET @StmtWhereClause = @StmtWhereClause + ' WHERE ' + @ConditionalClause
END

SET @SQLSTMT = @SQLSTMT + @StmtWhereClause
SET @SQLSTMT1 = @SQLSTMT1 + @StmtWhereClause

IF @GroupByFields IS NOT NULL AND RTRIM(LTRIM(@GroupByFields)) <> ''
BEGIN
    SET @SQLSTMT = @SQLSTMT + ' GROUP BY ' + @GroupByFields
    SET @SQLSTMT1 = @SQLSTMT1 + ' GROUP BY ' + @GroupByFields
END

IF @SortExpression IS NOT NULL AND RTRIM(LTRIM(@SortExpression)) <> ''
BEGIN
    SET @SortExpression = LTRIM(RTRIM(' ORDER BY ' + @SortExpression))
    SET @SQLSTMT = @SQLSTMT + ' ' + @SortExpression
    SET @SQLSTMT1 = @SQLSTMT1 + ' ' + @SortExpression
END

-- Throw away the rows before the start of the page, return the rest
SET @SQLSTMT = @SQLSTMT + ' DELETE FROM #temptable WHERE IntRowNum < ' + CAST(@StartRow AS varchar(10)) + ' SELECT * FROM #temptable DROP TABLE #temptable '

SET @SQLSTMT1 = @SQLSTMT1 + ' SELECT @CountValue AS MyRows '

--SELECT @SQLSTMT1
--SELECT @SQLSTMT

EXECUTE sp_executesql @SQLSTMT

EXECUTE sp_executesql @SQLSTMT1
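As a sanity check, here is a toy, self-contained run of the same windowing arithmetic (everything lives in tempdb; the usual caveat applies that `SELECT ... INTO` with `IDENTITY` does not strictly guarantee numbering order, though it behaves as expected on a simple single-table scan like this):

```sql
-- Page 3, size 10, over a 100-row table: the page should be rows 21-30.
CREATE TABLE #nums (n int)
DECLARE @i int
SET @i = 1
WHILE @i <= 100
BEGIN
    INSERT INTO #nums (n) VALUES (@i)
    SET @i = @i + 1
END

-- TOP 30 = StartRow (21) + PageSize (10) - 1
SELECT TOP 30 IntRowNum = IDENTITY(int, 1, 1), n
INTO #temptable
FROM #nums
ORDER BY n

DELETE FROM #temptable WHERE IntRowNum < 21
SELECT n FROM #temptable    -- the ten rows 21 through 30
DROP TABLE #temptable
DROP TABLE #nums
```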

Wednesday, March 7, 2012

Reducing the size of the Transaction Log.

We have a SQL Server 2000 SP3 database with the recovery model set to full. I set up 3 automatic backup jobs as follows:

Each day, Mon thru Fri, at 9:00 am
1 step - full backup of database to file A with overwrite

Each day, Mon thru Fri, every 30 minutes between 9:30 am and 5:00 pm
1 step - transaction log backup of database to file A with append

Each day, Mon thru Fri, at 5:05 pm
1st step - transaction log backup of database to file A with append
2nd step - execute BACKUP LOG WITH NO_LOG
3rd step - execute DBCC SHRINK (db_file_name,1)

First I tried to shrink the log file without issuing the "BACKUP LOG WITH NO_LOG" statement, but DBCC SHRINK doesn't shrink the file as I want, even though I back up the log.
When I use BACKUP LOG WITH NO_LOG everything seems to work OK. However, I get an error in the Event Viewer (Application Log) stating "Database Log truncated".
My questions are:
Should I worry about this error?
Is any data loss likely with this arrangement?
After all, I truncate the transaction log on purpose, right after backing it up in the previous step.
Regards,
Sezgin|||You should not use the no_log command if you require point-in-time recovery, and if you do not require point-in-time recovery, set your recovery model to simple. I would also advise against shrinking your log or data file like this: if the db or log has to autogrow, it costs you at the point it needs to allocate more space, and that is typically when your db is busy.
HTH
--
Ray Higdon MCSE, MCDBA, CCNA
--
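Ray's advice in practice: under the full recovery model, a plain log backup is what marks the inactive log reusable, without breaking the log chain the way NO_LOG does. A sketch only; the database name, logical file name, and path are illustrative:

```sql
-- Back up the log normally; this keeps the log chain intact and lets
-- SQL Server reuse the inactive portion of the log.
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'

-- If a one-off shrink is still needed after that, target a size the log
-- actually needs day to day (here 500 MB), not 1 MB.
DBCC SHRINKFILE (MyDatabase_Log, 500)
```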
|||Thanks for your reply Ray,
The database has a high number of transactions every day. I have to make sure any possible data loss is minimized; that is why I use the full recovery model. During tests, in about 5 weeks, the transaction log grew to nearly 7 GB (the data file is currently just over 1 GB). I tried my best to shrink the transaction log without using the "Backup Log with no_log" command, but with no success. Could you offer any advice or an alternative solution?
Best regards,
Sezgin Rafet
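Before shrinking again, it can help to confirm how full the log actually is. On SQL Server 2000:

```sql
-- Reports log file size and percent used for every database on the instance;
-- a large file that is mostly empty is a shrink candidate, a large file that
-- is mostly full means the log genuinely needs the space.
DBCC SQLPERF (LOGSPACE)
```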
|||The log should eventually shrink, and you should NOT be using backup log with no_log...
I noticed, however, that you said you were using dbcc shrink...
The proper command is dbcc shrinkfile, or dbcc shrinkdatabase... Could that be part of the problem?
--
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its community of SQL Server professionals.
www.sqlpass.org
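For reference, the two commands Wayne names take these shapes (the names `MyDatabase` and `MyDatabase_Log` are illustrative; the logical log file name can be found with `sp_helpfile`):

```sql
-- Shrink a single file toward a 100 MB target
DBCC SHRINKFILE (MyDatabase_Log, 100)

-- Shrink every file in the database, leaving 10% free space
DBCC SHRINKDATABASE (MyDatabase, 10)
```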
|||Thanks Wayne,
I used DBCC SHRINKFILE(db_log_file_name,1); I just mistyped it in the post.
Sezgin Rafet