Friday, March 30, 2012
page break after 10 rows
I have a report with 1 grouping level, and I want to display only 10 records per page. How can I force this grouping in RS? Can I use functions like RowNumber, and how precisely does it work?|||Yes, search this forum, it's been mentioned loads of times in here.
Effectively use the RowNumber function divided by 10 with the Ceiling
function.
--
Regards
Chris
joia wrote:
> I have a report with 1 grouping level, and I want to display only 10
> records per page. How can I force this grouping in RS, can I use
> functions like Rownumber, and how precisely does it work ?|||Hi, the easiest way to do it, as I found out, is using this function:
System.Math.Ceiling(RowNumber(Nothing)/10)
where 10 can be replaced by a report parameter.
Cheers, Johan
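To make the grouping concrete, here is a quick Python sketch (not SSRS itself) of what the expression Ceiling(RowNumber(Nothing)/10) evaluates to per row: every run of 10 consecutive rows gets the same group number, so a page break on that group yields 10 rows per page.

```python
import math

def page_group(row_number, rows_per_page=10):
    # Mirrors the SSRS group expression =Ceiling(RowNumber(Nothing) / N):
    # rows 1..10 -> group 1, rows 11..20 -> group 2, and so on.
    return math.ceil(row_number / rows_per_page)

print([page_group(n) for n in (1, 10, 11, 20, 21)])  # → [1, 1, 2, 2, 3]
```

In the report, that expression goes on a group in the table, with the group's page-break option enabled; the hard-coded 10 is where a report parameter can be substituted.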
"Chris McGuigan" wrote:
> Yes, search this forum, it's been mentioned loads of times in here.
> Effectively use the RowNumber function divided by 10 with the Ceiling
> function.
> --
> Regards
> Chris
> joia wrote:
> > I have a report with 1 grouping level, and I want to display only 10
> > records per page. How can I force this grouping in RS, can I use
> > functions like Rownumber, and how precisely does it work ?
>|||I am new to reporting services. Can you show me how to use this function?
Thanks,
Anna
"joia" wrote:
> Hi, the easiest way to do it, as i found out, is using this function:
> System.Math.Ceiling (RowNumber (Nothing)/10)
> where 10 can be replaced by a report parameter.
> Cheers, Johan
> "Chris McGuigan" wrote:
> > Yes, search this forum, it's been mentioned loads of times in here.
> > Effectively use the RowNumber function divided by 10 with the Ceiling
> > function.
> > --
> > Regards
> > Chris
> >
> > joia wrote:
> >
> > > I have a report with 1 grouping level, and I want to display only 10
> > > records per page. How can I force this grouping in RS, can I use
> > > functions like Rownumber, and how precisely does it work ?
> >|||I figured out how to use this function now.
Thank you for sharing this function.
Anna :)
"Anna" wrote:
> I am new to reporting services. Can you show me how to use this function?
> Thanks,
> Anna
> "joia" wrote:
> > Hi, the easiest way to do it, as i found out, is using this function:
> > System.Math.Ceiling (RowNumber (Nothing)/10)
> >
> > where 10 can be replaced by a report parameter.
> >
> > Cheers, Johan
> >
> > "Chris McGuigan" wrote:
> >
> > > Yes, search this forum, it's been mentioned loads of times in here.
> > > Effectively use the RowNumber function divided by 10 with the Ceiling
> > > function.
> > > --
> > > Regards
> > > Chris
> > >
> > > joia wrote:
> > >
> > > > I have a report with 1 grouping level, and I want to display only 10
> > > > records per page. How can I force this grouping in RS, can I use
> > > > functions like Rownumber, and how precisely does it work ?
> > >
Wednesday, March 28, 2012
PAD_INDEX : activating or not ?
On SQL Server 7 SP4.
3 tables with about 100 000 rows each, frequently updated.
1 clustered index on the primary key, and 2 non-clustered indexes on columns used by JOINs and by queries.
Fillfactor for indexes: 90.
PAD_INDEX: not activated.
I have some deadlock problems, probably due to index page and key locking (as stated in the error log). Is it possible that the cause is index page splits during UPDATEs?
Could activating PAD_INDEX solve part of this problem?
Thanks for any help|||It's a performance issue thing; I don't think that's your problem:
http://www.sql-server-performance.com/oltp_performance.asp|||Well, the nature of deadlocks has NOTHING to do with performance, and thus cannot be fixed by the recommendations given in the link. The only applicable advice, which is not even in the section implied, is to keep your transactions short. This will minimize the possibility of deadlocks, but will not eliminate them. As you all know, it comes down to object access order, and should be addressed accordingly, by examining all possible transactions involving the same set of objects in reverse order.|||PAD_INDEX is really not necessary. I have chased this guy down many roads.
I even had extended conversations with Kimberly Tripp and Kalen Delaney on this, and everyone I have talked to has told me that PAD_INDEX is just not gonna provide any real advantage.
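The object-access-order point above is the classic deadlock-avoidance rule. Here is a minimal, language-neutral sketch in Python (the lock names are made up for illustration): if every transaction acquires its locks in one agreed global order, a cycle of waiters cannot form.

```python
import threading

# Two shared resources, standing in for two tables touched by transactions.
locks = {"customers": threading.Lock(), "orders": threading.Lock()}

def run_transaction(needed):
    # Always acquire in sorted (alphabetical) order, regardless of the
    # order the transaction "wants" to touch the objects. Release in
    # reverse order of acquisition.
    taken = []
    for name in sorted(needed):
        locks[name].acquire()
        taken.append(name)
    try:
        return list(taken)  # the real work would happen here
    finally:
        for name in reversed(taken):
            locks[name].release()

# Both transactions end up acquiring in the same order, so they can never
# each hold one lock while waiting for the other's.
print(run_transaction(["orders", "customers"]))  # → ['customers', 'orders']
print(run_transaction(["customers", "orders"]))  # → ['customers', 'orders']
```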
Wednesday, March 21, 2012
Package Execution takes 100% CPU
Hi all,
I have a package which updates a table containing a large number of rows; when this package is running, the process consumes 100% CPU and other services crash (SSAS, SPS, SSRS).
Is there any parameter which could be set to prevent SSIS from consuming 100% of the CPU during the execution of this package?
Try to redefine your SSIS package...
Regards!
|||Is SSIS taking up the processor, or is it the relational engine? I assume it is the latter.
First off, if SQL Server is sharing the box with other processes, you need to limit the max amount of memory it can take. Otherwise, it is likely to hog it all and starve the other processes.
Second, if you have a lot of rows to update you should bulk insert the changes to a staging table and then run a single UPDATE statement joined between the staging and the target table. Using the OLE DB Command, SSIS can only update a single row at a time, which is very inefficient.
Finally, for a large table you need to be very careful about your indexes. A table with a lot of indexes will be much slower to update than one without. Ideally, you only want one index on this table, the one that will allow you to find the row to update. Obviously, you'll need to balance this with the intended use of the table.
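A rough sketch of the staging-table pattern described above, using SQLite in Python as a stand-in for SQL Server (the table and column names are invented): bulk-load the changed rows into a staging table, then apply them with one set-based UPDATE instead of one round trip per row.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target  (id INTEGER PRIMARY KEY, amount INTEGER);
    CREATE TABLE staging (id INTEGER PRIMARY KEY, amount INTEGER);
    INSERT INTO target  VALUES (1, 10), (2, 20), (3, 30);
    INSERT INTO staging VALUES (2, 200), (3, 300);
""")

# One statement updates every matched row, rather than firing a separate
# UPDATE per row as the OLE DB Command transform does.
con.execute("""
    UPDATE target
    SET amount = (SELECT s.amount FROM staging s WHERE s.id = target.id)
    WHERE id IN (SELECT id FROM staging)
""")

print(con.execute("SELECT id, amount FROM target ORDER BY id").fetchall())
# → [(1, 10), (2, 200), (3, 300)]
```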
|||Thanks for your answer.
Monday, March 12, 2012
Package cancelled
Hi,
I'm trying to transfer data from a DB2 database to a MS SQL Server database (approx. 150 000 rows).
When I execute the package, the process is always cancelled (In output window: SSIS package "DB2_test_TranAddl.dtsx" finished: Canceled.)
I use a view on DB2 to provide the data and I can modify it to return only a few rows. To achieve this, I use a WHERE clause: WHERE OUTLET_NO Between X And Y.
When I try OUTLET_NO Between 1 And 5 (255 rows) the package executes successfully.
When I try OUTLET_NO Between 5 And 10 (388 rows) it works.
When I try OUTLET_NO Between 1 And 10 (643 rows) it fails!!
Any idea?
How fast does each of the queries run (1-5 and 5-10)? Suppose the connector is timing out?|||
Phil Brammer wrote:
How fast does each of the queries run (1-5 and 5-10)? Suppose the connector is timing out?
A few seconds (less than 5)
The timeouts are set to 0.
I tried changing them to 60 seconds but it doesn't work...
|||OK here's an update:
I was using the IBM driver; it is not functional.
I installed the Microsoft OLE DB provider for DB2 and (after spending like half an hour trying to configure the connection) it works. 150 000 rows transferred successfully in a little more than 8 minutes. Takes about 1min30sec. with the DTS version so I guess I have to do some tweaking...
EDIT: I use the FastLoad on the destination. Takes approx 1min30sec.
|||Fleo,
I have the same performance issue you mentioned. 300,000 rows takes only 3 minutes with the ODBC connection in DTS, but more than 5 minutes with the Microsoft OLE DB provider for DB2 in SSIS. Have you figured out any tweaking that may help speed up the transfer?
Thanks.
|||Any reason you can't use an ODBC connection in SSIS?
-Jamie
|||I need to load data from SQL2005 to AS400 and there is no ADO.NET/ODBC destination. Also, the unicode issue.|||
Jane2006 wrote:
I need to load data from SQL2005 to AS400 and there is no ADO.NET/ODBC destination. Also, the unicode issue.
Right, well, let's keep your issue to the thread you started: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1723508&SiteID=1
You can use the OLE DB Destination to write to AS400.
p2p replication error too many arguments specified
I have peer-to-peer replication set up between two servers (Server1 and Server2).
I added a few columns using the sp_repladdcolumn stored procedure. Rows inserted
at Server1 are replicated to Server2 without problems, but rows inserted at Server2
cannot be replicated to Server1 because of the error below. Any help is appreciated. Thanks, D
Command attempted:
if @@trancount > 0 rollback tran
(Transaction sequence number: 0x0000008600000EE7000400000000, Command ID: 1)
Error messages:
Procedure or function sp_MSins_dboCustomers0823818373 has too many arguments
specified. (Source: MSSQLServer, Error number: 8144)
Get help: http://help/8144
Procedure or function sp_MSins_dboCustomers0823818373 has too many arguments
specified. (Source: MSSQLServer, Error number: 8144)
Get help: http://help/8144
Paul,
First time at this. Not sure how to quiesce the system. Does
this mean change db to single user?
I've used sp_repladdcolumn to add two columns to the publisher's article.
Both tables at subscriber and publisher do reflect the new columns.
However, stored procedures at the publisher do not receive the changes
while stored procedures at subscriber do.
When a new record is inserted into the table at publisher, the
new record does get distributed. When a new record is inserted
at the subscriber, the record got an error when being distributed.
Thanks, D
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:OwyqlhJwHHA.4612@.TK2MSFTNGP04.phx.gbl...
> 1. Quiesce the system.
> 2. Execute the data definition language (DDL) statements to modify the
> schema of published tables. For more information about supported schema
> changes, see Making Schema Changes on Publication Databases.
> 3. Before resuming activity on published tables, quiesce the system
> again. This ensures that schema changes have been received by all nodes
> before any new data changes are replicated.
> The general advice is to do things as above - is that how you made the
> schema change?
> I'm also interested in what has actually happened - do the tables have an
> equal number of columns now? Can you also compare the text of the 2
> sp_MSins_dboCustomers procs (pub and subs) and see if they are different.
> If so, can you tell us which one is correct.
> Cheers,
> Paul Ibison
>
Friday, March 9, 2012
ownership chains, users and views
We have a design using views to partition visibility of table rows by SQL user. Say we have two salesmen, Fred and Barney, and we want to create 2 views:
CREATE VIEW Fred.SALES_LEADS AS SELECT * FROM dbo.ALL_SALES_LEADS WHERE SALES_PERSON='Fred'
and
CREATE VIEW Barney.SALES_LEADS AS SELECT * FROM dbo.ALL_SALES_LEADS WHERE SALES_PERSON='Barney'
Neither salesman should be able to select from ALL_SALES_LEADS, and each, being amazingly SQL-savvy salesmen, can log on to SQL and execute SELECT * FROM SALES_LEADS WHERE STATUS='New'.
The problem is that because neither Fred nor Barney owns the ALL_SALES_LEADS table, they can't select from the view with any combination of GRANT/DENY/REVOKE statements that we can find, because the ownership chains are broken.
Our best fallback is to name the views differently but leave them owned by dbo. Not as neat and tidy, i.e.
CREATE VIEW dbo.Fred_SALES_LEADS AS SELECT * FROM dbo.ALL_SALES_LEADS WHERE SALES_PERSON='Fred'
GRANT SELECT ON dbo.Fred_SALES_LEADS TO Fred
Is there any way around our original problem, SQL gurus? I think this would be a very good use of user-owned views in a number of realistic scenarios.|||> Our best fallback is to name the views differently but leave them owned by dbo.
The table and view need to have the same owner in order to use views as a
security mechanism. If each salesperson accesses the database with their
own userid, you might consider filtering using database userid instead of a
hard-coded constant. This way, you only need one view. For example:
CREATE VIEW dbo.SALES_LEADS AS
SELECT *
FROM dbo.ALL_SALES_LEADS
WHERE SALES_PERSON = CURRENT_USER
GO
GRANT SELECT ON dbo.SALES_LEADS TO SalesRole
GO
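To illustrate what the CURRENT_USER filter buys you, here is a tiny Python simulation (the data and identifiers are made up): one shared view definition, with each caller seeing only their own rows, resolved at query time from their identity.

```python
def sales_leads_for(current_user, all_sales_leads):
    # Stands in for: SELECT * FROM dbo.ALL_SALES_LEADS
    #                WHERE SALES_PERSON = CURRENT_USER
    return [row for row in all_sales_leads if row["sales_person"] == current_user]

leads = [
    {"sales_person": "Fred",   "status": "New"},
    {"sales_person": "Barney", "status": "New"},
    {"sales_person": "Fred",   "status": "Closed"},
]

print(len(sales_leads_for("Fred", leads)))    # → 2
print(len(sales_leads_for("Barney", leads)))  # → 1
```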
Hope this helps.
Dan Guzman
SQL Server MVP
"Dave Cattermole" <anonymous@.discussions.microsoft.com> wrote in message
news:726B0928-E93C-4233-A5FE-051E8ADA5684@.microsoft.com...
> We have a design using views to partition visibility of table rows by sql user. Say we have two salesmen Fred and Barney, we wanted to create 2 views:
> CREATE VIEW Fred.SALES_LEADS AS SELECT * FROM dbo.ALL_SALES_LEADS WHERE SALES_PERSON='Fred'
> and
> CREATE VIEW Barney.SALES_LEADS AS SELECT * FROM dbo.ALL_SALES_LEADS WHERE SALES_PERSON='Barney'
> Neither salesman should be able to select from ALL_SALES_LEADS and each, being amazingly SQL savvy salesmen, can log on to sql and execute SELECT * FROM SALES_LEADS WHERE STATUS='New'.
> Problem is that because neither Fred nor Barney owns the ALL_SALES_LEADS table, they can't select off the view with any combination of GRANT/DENY/REVOKE statements that we can find because the ownership chains are broken.
> Our best fallback is to name the views differently but leave them owned by dbo. Not as neat and tidy. ie
> CREATE VIEW dbo.Fred_SALES_LEADS AS SELECT * FROM dbo.ALL_SALES_LEADS WHERE SALES_PERSON='Fred'
> GRANT SELECT ON dbo.Fred_SALES_LEADS TO Fred
> Is there any way around our original problem, SQL gurus? I think this would be a very good use of user-owned views in a number of realistic scenarios.
Saturday, February 25, 2012
Overwrite Excel spreadsheet
I have a DTS package that imports a file with data,
performs several checks,
puts all invalid rows in a table,
exports the table to a excel spreadsheet.
My problem is that it appends the data every time. I want it to overwrite the existing spreadsheet. I can't find any option for that.
I use the "Transform Data Task" to do this.
Any suggestions?|||Do an EXECUTE PROCESS Task and supply a DELETE command...
But I didn't think it did an append...|||I had the same problem this week. I made a template of the excel sheet, selected the columns needed and created a named range on the excel sheet, and added it as a linked server. Once it's linked like this you can treat the named range like a table and use a delete from statement to clear the sheet.
Run this command from QA to make the excel sheet a linked server, 'excelsource' can be any name you want to give the connection.
sp_addlinkedserver 'excelsource', 'jet 4.0', 'Microsoft.Jet.OLEDB.4.0',
'c:\temp\filename.xls', null, 'excel 5.0'
If the named range on the excel sheet is also named excelsource use
delete from excelsource...excelsource
overview query
I am able to do a query to get the table names:
SELECT name FROM dbo.sysobjects WHERE type='U' and name !='dtproperties' ORDER BY name
This shows a listing sorta like:
table1
table2
table3
table4
I would like to do a
SELECT COUNT(*) FROM table1
SELECT COUNT(*) FROM table2
SELECT COUNT(*) FROM table3
SELECT COUNT(*) FROM table4
so that my output looks like
table1 13
table2 47
table3 0
table4 93
Anyone know a way I can do this in 1 SQL query, or know a way to use multiple SQL statements with an ASP.NET datagrid?|||I don't know of a single SELECT to handle your request, but could you call a stored procedure from your ASP page?
create procedure TableRowCount
as
declare @TableName sysname
create table #Tmp (TableName sysname, [RowCount] int)
select @TableName = min([name]) from sysobjects WHERE type='U' and [name] != 'dtproperties'
while (@TableName is not null) begin
    insert into #Tmp
    exec('select ''' + @TableName + ''', count(*) from [' + @TableName + ']')
    select @TableName = min([name]) from sysobjects WHERE type='U' and [name] != 'dtproperties' and [name] > @TableName
end
select cast(TableName as varchar) as TableName, [RowCount] from #Tmp
return 0
go
exec TableRowCount
go|||Thanks! You're a genius. Now I just have to figure out what all that means. I'm not used to stored procedures, but that gives the output I need, in the format I need it.
Thanks again!|||Basically, I created a temporary table to hold the results of counting the number of records in a table, then stepped through the sysobjects table looking for records that matched your original where clause. Once a table was found I did the count and stuffed the result set into the temp table. When there were no more tables to count I returned the result set. All of this is basic stuff and Books Online will answer any questions you have.
I should have put comments in the code!|||na, it's fine how it was, because I understood most of it, and i'm just a sql beginner.
I just didn't understand the #Tmp being a temporary table, since i'd never seen those, but that's the idea i was going to try, but keeping track of the table was going to be a pain.
Hopefully now I'll be able to finish my application :)|||Would #Tmp be what I'm hearing referred to as a cursor? My boss does a lot of programming, and suggested that I make a cursor, but I had never heard that terminology.|||No, #Tmp is a session temporary table. Once you log off, the table is destroyed.
You can read up on cursors in Microsoft's Books Online. They have their uses but are VERY resource intensive.
The solution I posted is an alternative to a cursor.|||ok cool
you bet i'll be reading those ;) I'd really eventually like to do some more sql programming, it's kinda just going along with my asp.net stuff right now, but that's next on the list, since data is important to all applications ;)
thanks again paul
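For anyone following along, the walk-the-catalog-and-count loop in the stored procedure above can be sketched in Python against SQLite (illustrative only; SQLite's sqlite_master catalog stands in for sysobjects, and the tables are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (x INTEGER);
    CREATE TABLE table2 (x INTEGER);
    INSERT INTO table1 VALUES (1), (2), (3);
    INSERT INTO table2 VALUES (1);
""")

# Step 1: enumerate the user tables from the catalog.
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]

# Step 2: run COUNT(*) per table and collect (name, rowcount) pairs,
# just as the stored procedure stuffs each count into #Tmp.
counts = [(t, con.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0])
          for t in tables]

print(counts)  # → [('table1', 3), ('table2', 1)]
```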