Wednesday, March 28, 2012
Packaging reports as a product?
I've written an application that generates a couple of stored procs and
views, and installs them into a DB. I also have a series of reports in
Reporting Services that pull data from those views.
It's easy enough for me to distribute the .exe for the app that generates SPs
and views, but how do I distribute the reports without giving the installer
access to them? Basically, I don't want anyone to poke around through the
formulas, etc.
Is there a way to do this?|||Henry,
Even if you secure the installation process, the administrator can always see
the report definition in Report Manager. So I would rather gravitate
toward moving the sensitive information into an external .NET assembly and
possibly obfuscating it.
--
Hope this helps.
----
Teo Lachev, MVP [SQL Server], MCSD, MCT
Author: "Microsoft Reporting Services in Action"
Publisher website: http://www.manning.com/lachev
Buy it from Amazon.com: http://shrinkster.com/eq
Home page and blog: http://www.prologika.com/
----
"Henry Zektser" <Henry Zektser@.discussions.microsoft.com> wrote in message
news:6C861463-3CD3-40BB-80B2-A24D2A443923@.microsoft.com...
> I've written an application that generates a couple of stored procs and
> views, and installs them into a DB. I also have a series of reports in
> Reporting Services that pull data from those views.
> It's easy enough for me to distribute the .exe for the app that generates
> SPs and views, but how do I distribute the reports without giving the
> installer access to them? Basically, I don't want anyone to poke around
> through the formulas, etc.
> Is there a way to do this?|||You're suggesting an assembly that spits out a dataset, and reporting on that?
Are there any examples of this around?
"Teo Lachev [MVP]" wrote:
> Henry,
> Even if you secure the installation process the administrator can always see
> the report definition in the Report Manager. So, I would rather gravitate
> toward moving the sensitive information in an external .NET assembly and
> possibly obfuscating it.
|||Well, it depends on what you want to protect. You mentioned that you want to
protect the formulas, so I imagined that you have some sensitive information
embedded in your formulas that you don't want to reveal.
What exactly do you want to protect?
Monday, March 26, 2012
Package won't run from job
I imported my packages to SQL Server 2005.
Under Stored Packages > msdb, if I right-click the package name and select "run package", the package runs fine without errors.
However if I put the same package in a job and run the job, the job fails every time and won't tell me why. It just says to check the history. This is what the history says:
Executed as user: myDomain\SQLService. The package execution failed. The step failed.
Seems like some kind of permission failure.
Any ideas, please. This permissions stuff is driving me nuts.
Here are all the permission levels that I am aware of when running a job:
1.) permissions at the package level (conn mgrs, ftp mgrs, etc)
2.) rely on SQL Server storage for roles and access control - what does that even mean?
3.) job owner (sa) - how does this differ from the job step owner?
4.) job step > run as [user] (SQL Server Agent Account) - why does this require a separate owner?
5.) package > server > Log on to the server [as user] - what the heck is this?
It's maddening. For example, I have no idea what #5 means - when it (the package) is logging on to the server, WHO is it logging on as if the package is running in a job? The job owner? The job step owner? Or something entirely different?
I've asked other people here if they have any clue how all these permissions interrelate, and frankly no one knows. It's a big mystery..... sort of like the big bang. Don't ask me how I passed my MCDBA. That is another great mystery of the universe.
It's no wonder I can't get this thing to run!
Thank you, error handler, for sending me task-related error messages.
So here's the problem:
The script threw an exception: Could not find a part of the path 'o:\myFolder'.
In one of my packages, I have a script that gets file names from the network drive, o:
My other packages reference the d: drive with no problem. It's the o: drive that's having problems.
So maybe whoever is running the package doesn't have permissions to the o: drive, right?
So, getting back to my first post, who needs the permission to read the o: drive? Probably the step owner?
|||
See this classic KB that explains it:
http://support.microsoft.com/default.aspx/kb/918760
The access to o: should be granted to the user who runs the package - Agent service account or Proxy account if you use it for this job. In your case this is currently myDomain\SQLService (from "Executed as user..." message). Also, if o: is network drive, switch to using UNC path (\\server\path) as drive mapping is user-specific.
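For reference, the proxy approach the KB describes can be sketched in T-SQL roughly as follows. This assumes SQL Server 2005, where subsystem ID 11 is the SSIS package-execution subsystem; the account, password, and object names here are hypothetical placeholders:

```sql
-- Store a Windows account that has rights to the network share.
CREATE CREDENTIAL SsisFileCred
    WITH IDENTITY = 'myDomain\SsisRunner', SECRET = 'StrongPasswordHere';

-- Create an Agent proxy from the credential and allow it to run SSIS job steps.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = 'SsisFileProxy',
    @credential_name = 'SsisFileCred';
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = 'SsisFileProxy',
    @subsystem_id = 11;  -- 11 = SSIS package execution (SQL Server 2005)
```

The job step's "Run as" dropdown can then be pointed at SsisFileProxy instead of the SQL Server Agent service account, so it is the proxy's Windows identity that needs rights to the share.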
|||So it seems the problem has to do with the fact that the SQLService domain account does not have permissions to access o:, or that I need to use a UNC path.
However, I still have some questions about how all the pkg/job permissions interrelate to each other:
1.) permissions at the package level (ole db conn mgrs, ftp mgrs, etc) - the SQLService account does not have anything to do with this, right?
2.) rely on SQL Server storage for roles and access control - what does that mean? Does this relate to #1 above?
3.) job owner (sa) - how does this differ from the job step owner?
5.) New Job Step > General tab > SQL Server Integration Services Package > Server: Log on to the server [Windows Authentication] - will this be the same login as the job step owner, that is, the SQLServer account?
If anyone could answer any or all of these questions, and #5 especially, I would really appreciate it!
Thanks much
|||
sadie519590 wrote:
1.) permissions at the package level (ole db conn mgrs, ftp mgrs, etc) - the SQLService account does not have anything to do with this, right?
It depends - if your connection managers are using Windows Authentication, then the account SQL Agent is running under does have an impact.
sadie519590 wrote:
2.) rely on SQL Server storage for roles and access control - what does mean? does this relate to #1 above?
Without more context, I'm not sure. My best guess is that it is referring to storing packages in SQL Server rather than the file system. If you do that, it can handle access to the packages.
sadie519590 wrote:
5.) New Job Step > General tab > SQL Server Integration Services Package > Server: Log on to the server [Windows Authentication] - will this be the same login as the job step owner, that is, the SQLServer account?
Yes, unless you have set up a proxy.
|||
1.) permissions at the package level (ole db conn mgrs, ftp mgrs, etc) - the SQLService account does not have anything to do with this, right?
It depends - if your connection managers are using Windows Authentication, then the account SQL Agent is running under does have an impact.
That's because the conn mgrs will be using the SQL Agent account, right? (Because this is the job step owner?)
Thanks|||Correct.|||o: is a mapped drive, correct? Mapped drives are a user-session concept. That is, when you log on, you can map drives. However, if I remote into that box and log on as well, I don't get to see your mapped drives. Service accounts, such as those that run SQL Server, SQL Server Agent, etc... don't have any clue what a mapped drive is. As soon as you log off of the box, the mapped drives go away. So yes, if you are going to use network shares, you HAVE to use UNC.|||
sadie519590 wrote:
So it seems the problem has to do with the fact that the SQLService domain account does not have permissions to access o:, or that I need to use a UNC path.
Yes, the unc path did solve the problem.
Thanks all.
Tuesday, March 20, 2012
Package Error.
I have created a package which contains execution of a set of SQL Stored Procedures. I scheduled the job to run every morning at 7:00 AM. I am getting the following error:
Executed as user: APD-DEV-CS517\SYSTEM. The package could not be found. The step failed.
What do you think the problem is?
Thanks again.|||Looks to me that "the package could not be found." Have you ensured the path is correct?|||Yeah I am sure.
Let me make sure about that and wait till tomorrow morning.
Thanks though,
Murthy here
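If the package is stored in msdb (as in the previous thread), one quick way to verify the name the job step is looking for is to query the 2005 system table that backs MSDB package storage. A minimal sketch; column selection is illustrative:

```sql
-- List the packages that exist in msdb storage, to compare against
-- the package name/path configured in the failing job step.
SELECT name, folderid, createdate
FROM msdb.dbo.sysdtspackages90;
```

If the package was deployed to the file system instead, the job step's package path has to match the file location on the server running SQL Agent, not on the workstation where the package was built.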
Package configurations stored in the database
If the configurations are stored in the database, then does the package pick up the configuration as and when required, or is it a one-time pick when the package is executed?
What I plan to do:
When the package is executed, I want to have a script task at the start of the package that will update the configuration table in the database with the values that the package needs to run with. I mean change the values of the configurations of the tasks which will be executed after the script task.
Thanks for your time.
$wapnil
They are a one-time pickup at the beginning of execution. Look at the "Execution Results" of one of your packages.|||
How to look at the execution results?
Thanks,
$wapnil
|||spattewar wrote:
What I plan to do:
When the package is executed, I want to have a script task at the start of the package that will update the configuration table in the database with the values that the package needs to run with. I mean change the values of the configurations of the tasks which will be executed after the script task.
You will not be able to change the table content within the same package that will use it, because of what Phil has just said.
But why would you want to change it that way... actually, where will that script get the values from?
Have you looked at the dtexec SET option to assign values to the package properties instead?
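The dtexec SET option mentioned above looks roughly like this; the package path, server name, and variable name below are hypothetical:

```shell
rem Override a package variable at execution time instead of editing
rem the configuration table (names are made up for illustration).
dtexec /SQL "\MyFolder\FtpDownload" /SERVER myServer ^
  /SET "\Package.Variables[User::FtpServer].Properties[Value];ftp1.example.com"
```

Each invocation can pass a different value, which is one way to run the same package logic against different FTP servers without touching the stored configuration.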
|||spattewar wrote:
How to look at the execution results?
Thanks,
$wapnil
It's a tab in Visual Studio.|||
I think I got it all wrong. Let me try to explain
I have developed a single package and have stored the configuration details in the database in the table SSIS configurations. Now this package does the task of FTP connect, download files and upload into a destination table in the database. Now I want this package
1) to be able to connect to different FTP servers in a sequence one after the other. The FTP connection details are stored in one more table FTP_Details in the database.
2) OR be able to run multiple instances of this package at the same time, each connecting to different FTP servers and downloading files.
Could you kindly provide your inputs.
Thanks for your responses.
$wapnil
|||A simple way may be to have a master package with a ForEach Loop container and an Execute Package task inside. The ForEach Loop would iterate through the rows in the FTP_Details table to get the FTP connection details and place them into a variable(s). Then the child package would use a Parent package variable configuration to receive the proper connection details on each iteration.
Notice that this approach will not execute the package in parallel.
|||Rafael's got it covered, but in case you need more information (again, the forum search is your friend), here's a link on this very topic:http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=729225&SiteID=1|||
Thanks Phil and Rafeal. I got the approach on how to do it now.
just one more question.
If I want to run multiple instances of this package at the same time then what should be the best approach.
again, thanks for your time.
$wapnil
|||You can't run the same package multiple times at once UNLESS you create a master package that calls the same package under an Execute Package task multiple times. Even then, though, I'm not sure that's a good idea given that the packages will all share the same GUID (because they are the same!). I don't understand why you'd want them to run at the same time, when kicking one package off in a foreach loop would be best. If you must run the same package multiple times then perhaps you'd be better off copying the package (physically) and then changing its GUID.
My only concern is how can you run the same package concurrently with another instance of itself without having one step on the other's toes?|||
I would have used a foreach loop if it sufficed our requirement.
One of the requirements could be that we want to download files from 4 different FTPs at the same time and process the files. Does this call for creating four different packages, one for each FTP?
Can I not use the dtexec utility and run the package multiple times by passing different arguments?
Thanks for your time.
$wapnil
|||spattewar wrote:
I would have used a foreach loop if it sufficed our requirement.
One of the requirements could be that we want to download files from 4 different FTPs at the same time and process the files. Does this call for creating four different packages, one for each FTP?
Can I not use the dtexec utility and run the package multiple times by passing different arguments?
Thanks for your time.
$wapnil
Yes, calling dtexec would work just fine. If you want to make your FTP site list database driven, then no, it won't help, UNLESS you passed in a key to dtexec or something, and that key matches up to a row in the database.
What we've been talking about here is how you can create one package which will loop through a table to set up the FTP connection to as many sites as are listed in the table.
There are numerous ways to do this, of course.|||
Ok, I understand that.
Then maybe I can save the package configuration into an XML file and then provide that XML file as an input to dtexec. This way I can run the same package by passing different configuration files.
What do you say?
Thanks for your time.
$wapnil
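A configuration file for that approach follows the SSIS XML configuration (.dtsConfig) format; the variable name and server below are hypothetical:

```xml
<?xml version="1.0"?>
<DTSConfiguration>
  <!-- One configuration entry per property/variable to override -->
  <Configuration ConfiguredType="Property"
                 Path="\Package.Variables[User::FtpServer].Properties[Value]"
                 ValueType="String">
    <ConfiguredValue>ftp1.example.com</ConfiguredValue>
  </Configuration>
</DTSConfiguration>
```

Passing it with `dtexec /ConfigFile ftp1.dtsConfig` (and a second copy with a different server for a second instance) is one way to run the same package with different settings.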
|||spattewar wrote:
Ok, I understand that.
Then maybe I can save the package configuration into an XML file and then provide that XML file as an input to dtexec. This way I can run the same package by passing different configuration files.
What do you say?
Thanks for your time.
$wapnil
Sounds okay to me, if that's what you want to do... I still question the need to run them at the same time... If you're downloading gigs of data, I'm not sure that you'll get any faster results by running them concurrently versus serialized. The data pipe can only move so much data. So, with that said, you may want to look at building a more modular approach by using a database table to drive your connections.
However you decide, I think you've got enough information to make some progress.|||Of course I have. Thanks
Monday, March 12, 2012
p2p replication error too many arguments specified
I have peer-to-peer transactional replication set up between two servers (Server1 and Server2).
I added a few columns using the sp_repladdcolumn stored procedure. Rows inserted at Server1 are replicated to Server2 without problems, but rows inserted at Server2 cannot be replicated to Server1 because of the error below. Any help is appreciated. Thanks, D
Command attempted:
if @@trancount > 0 rollback tran
(Transaction sequence number: 0x0000008600000EE7000400000000, Command ID: 1)
Error messages:
Procedure or function sp_MSins_dboCustomers0823818373 has too many arguments
specified. (Source: MSSQLServer, Error number: 8144)
Get help: http://help/8144
Procedure or function sp_MSins_dboCustomers0823818373 has too many arguments
specified. (Source: MSSQLServer, Error number: 8144)
Get help: http://help/8144
Paul,
First time at this. Not sure how to quiesce the system. Does this mean changing the db to single-user mode?
I've used sp_repladdcolumn to add two columns to the publisher's article.
Both tables at subscriber and publisher do reflect the new columns.
However, stored procedures at the publisher do not receive the changes
while stored procedures at the subscriber do.
When a new record is inserted into the table at the publisher, the new record does get distributed. When a new record is inserted at the subscriber, the record gets an error when being distributed.
Thanks, D
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:OwyqlhJwHHA.4612@.TK2MSFTNGP04.phx.gbl...
> 1. Quiesce the system.
> 2. Execute the data definition language (DDL) statements to modify the
> schema of published tables. For more information about supported schema
> changes, see Making Schema Changes on Publication Databases.
> 3. Before resuming activity on published tables, quiesce the system
> again. This ensures that schema changes have been received by all nodes
> before any new data changes are replicated.
> The general advice is to do things as above - is that how you made the
> schema change?
> I'm also interested in what has actually happened - do the tables have an
> equal number of columns now? Can you also compare the text of the 2
> sp_MSins_dboCustomers procs (pub and subs) and see if they are different.
> If so, can you tell us which one is correct.
> Cheers,
> Paul Ibison
>
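For reference, the column additions described in this thread would have been made with a call roughly like the one below (table and column names here are hypothetical). The 8144 "too many arguments" error suggests the sp_MSins_ insert proc on one node still has the old parameter list, which is why Paul's suggestion to compare the two procs' text is the right diagnostic:

```sql
-- Add a column to a published article; replication is expected to script
-- the change to the other nodes and regenerate the sp_MSins_/sp_MSupd_ procs.
EXEC sp_repladdcolumn
    @source_object = 'Customers',
    @column = 'LoyaltyTier',
    @typetext = 'int NULL';
```

If the schema change was made while the topology was not quiesced, regenerating (or manually aligning) the stale insert proc on the failing node is the usual fix.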
Friday, March 9, 2012
Ownership of Stored Procedures/Functions By Role Other Than dbo
(1) Developers are not allowed to create/alter/delete tables owned
by dbo. To prevent this, no developers will be granted role db_owner.
Developers should only be creating/modifying stored procedures/functions.
(2) All tables will be owned by dbo. DBA's (who have role db_owner
and server privileges) will be creating/modifying table definitions. DBA's will also be granting individual table priv's to the developers (most likely through the role "dco" below).
(3) All stored procedures/functions will be owned by a new role "dco"
(database code owner). All developers will be granted role "dco". No tables should be created in "dco", so role "dco" will be DENY-ed the privilege CREATE TABLE. (I'm also thinking about restricting view creation to dbo by DENY-ing CREATE VIEW.) DBA's will implicitly get access to dco procs/funcs from server privileges.
Does anybody see any possible problems with this approach? Have you tried anything like this? I've read about "broken ownership chains", but as long as the DBA grants the object privs on the tables to the developers I don't see a problem.
Thanks in advance for any input.
Jeff

Who will be the owner of what the DCO's will create?|||Role "dco" will own the stored procedures/functions, i.e.:
Create Procedure mydb.dco.myproc ...
Create Function mydb.dco.myfunc ...
Jeff
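Jeff's scheme can be sketched in SQL Server 2000 syntax; 'dev1' is an illustrative developer user, and this is only a sketch of the setup described above:

```sql
USE mydb
GO
-- The "database code owner" role that will own procs/functions:
EXEC sp_addrole 'dco'
EXEC sp_addrolemember 'dco', 'dev1'   -- grant each developer the role

-- Keep developers out of table/view creation, as proposed:
DENY CREATE TABLE TO dco
DENY CREATE VIEW  TO dco
GO
-- Developers then create code objects owned by the role:
-- CREATE PROCEDURE dco.myproc AS SELECT ...
```

As noted in the thread, as long as the DBA grants the needed table permissions to the dco members, the dbo-owned tables referenced by dco-owned procs will not chain, so those grants are what make the procs work.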
Ownership chain question
MS SQL 7.0 on NT 4 (clustered)
Our developers use InfoMaker to create many end-user reports. They are
used to creating many stored procs and many "intermediate" tables instead of
creating "pure" InfoMaker reports. Report components are often modified
directly on the production database during office hours by those developers.
All of our end-user reports are going to be rebuilt with Crystal Reports.
Is it a good idea to ask our developers to locate those end-user report
components in another database instead of mixing DATA and REPORT
components in the same production database? (I know that we must take care
of ownership chains.)
Thank you
Danny

"Danny Presse" <dpresse@.congresmtl.com> wrote in message
news:eA6M9CfXDHA.2476@.tk2msftngp13.phx.gbl...
> HI,
> MS SQL 7.0 on NT 4 (clustered)
> Our developers use Infomaker to create many end-user's reports. They are
> used to creating many Stored Proc and many "intermediate" tables instead of
> creating "pure" Infomaker Reports. Report's components are often modified
> directly on Production database during office hours by those developers.
> All of our end-user's reports are going to be rebuilt with Crystal Report.
> Is it a good idea to ask our developers to locate those end-user's reports
> components into another databases instead of mixing DATA and REPORTS
> component into the same Production Database ? (I know that we must take care
> of Ownership chain)
Yes. Definitely.
Give them guest access only to the production database (you need this for
cross-database views to work).
Create views in their database that proxy over to the production database.
This keeps them from having to bind to the database name of the production
database.
David
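David's proxy-view idea can be sketched as follows; ReportDB, ProdDB, and the Orders table are illustrative names, not taken from the thread:

```sql
USE ReportDB
GO
-- A view in the developers' database that proxies over to production.
-- Reports bind only to ReportDB, so the production database name
-- appears in exactly one place:
CREATE VIEW dbo.v_Orders
AS
SELECT OrderID, CustomerID, OrderDate
FROM   ProdDB.dbo.Orders
GO
```

For the cross-database ownership chain to work as David describes, both the view and the underlying table should be dbo-owned, and the developers need at least guest access in the production database.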
Owner of tables
I have a stored procedure that, among other things, drops and recreates two
new tables:
drop table dbo.t_de_tracking
drop table dbo.t_MR_TRACKING
SELECT * INTO dbo.T_DE_TRACKING
FROM OPENQUERY(ACS_SLR_DE, 'SELECT * FROM ACS_SLR_DATAENTRY..
T_DE_TRACKING (nolock)')
SELECT * INTO dbo.T_MR_TRACKING
FROM OPENQUERY(ACS_SLR_MR, 'SELECT * FROM ACS_SLR_MAILROOM..
T_MR_TRACKING (nolock)')
When these tables are created the owner is NOT dbo. How can I get the owner
of these tables to become dbo when created by this stored procedure?
Message posted via droptable.com
http://www.droptable.com/Uwe/Forum...server/200510/1

Robert,
Do you really want to drop/create the tables, or just TRUNCATE the table and
reload it? Dropping is probably not the best approach, especially if there are
a lot of records and/or NC indexes. What are the fixed server and fixed
database role memberships of the user creating this proc? Also, you might want
to build in some IF logic to test whether the tables exist prior to dropping them.
HTH
Jerry
"Robert R via droptable.com" <u3288@.uwe> wrote in message
news:55d48972e2388@.uwe...
>I have a stored procedure, that among other things drops and recreates two
> new tables:
> drop table dbo.t_de_tracking
> drop table dbo.t_MR_TRACKING
> SELECT * INTO dbo.T_DE_TRACKING
> FROM OPENQUERY(ACS_SLR_DE, 'SELECT * FROM ACS_SLR_DATAENTRY..
> T_DE_TRACKING (nolock)')
> SELECT * INTO dbo.T_MR_TRACKING
> FROM OPENQUERY(ACS_SLR_MR, 'SELECT * FROM ACS_SLR_MAILROOM..
> T_MR_TRACKING (nolock)')
> When these tables are created the owner is NOT dbo. How can I get the
> owner
> of these tables to become dbo when created by this stored procedure?
>
> --
> Message posted via droptable.com
> http://www.droptable.com/Uwe/Forum...server/200510/1

Try naming the stored procedure dbo.[sproc name]
--
Thomas
"Robert R via droptable.com" wrote:
> I have a stored procedure, that among other things drops and recreates two
> new tables:
> drop table dbo.t_de_tracking
> drop table dbo.t_MR_TRACKING
> SELECT * INTO dbo.T_DE_TRACKING
> FROM OPENQUERY(ACS_SLR_DE, 'SELECT * FROM ACS_SLR_DATAENTRY..
> T_DE_TRACKING (nolock)')
> SELECT * INTO dbo.T_MR_TRACKING
> FROM OPENQUERY(ACS_SLR_MR, 'SELECT * FROM ACS_SLR_MAILROOM..
> T_MR_TRACKING (nolock)')
> When these tables are created the owner is NOT dbo. How can I get the owner
> of these tables to become dbo when created by this stored procedure?
>
> --
> Message posted via droptable.com
> http://www.droptable.com/Uwe/Forum...server/200510/1
>
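Jerry's two suggestions can be sketched together. This assumes SQL Server 2000, where creating an explicitly dbo-qualified table requires the caller to be in db_owner, db_ddladmin, or sysadmin; without those rights the CREATE fails rather than quietly producing a table owned by the caller:

```sql
-- Test for existence before dropping, and two-part-qualify the target
-- so the owner is unambiguous:
IF OBJECT_ID('dbo.T_DE_TRACKING') IS NOT NULL
    DROP TABLE dbo.T_DE_TRACKING

SELECT *
INTO   dbo.T_DE_TRACKING
FROM   OPENQUERY(ACS_SLR_DE,
       'SELECT * FROM ACS_SLR_DATAENTRY..T_DE_TRACKING (nolock)')
```

If the tables still come out owned by someone else, the user running the proc lacks the rights above, which is why the thread also asks about role memberships.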
Owner of table creation in SP
I've got a problem with table creation in stored procedures (SQL Server 2000). We've got an application where the user login only has rights to execute stored procedures. The problem is that a stored proc is dynamically creating a table and so the owner of that table is being assigned to whatever login the application is using instead of dbo. It's causing numerous issues. Is there any way that this can be avoided or changed without granting the user sa privileges?
Thanks in advance,
Cat

I never write application code that creates permanent database objects on the fly. It's bad news more often than not.|||Hi Thras,
They're not actually permanent, but they need to stick around for longer than the stored proc that's creating them.
Cat|||How about this then. Create the table as dbo.<tablename> with the needed columns, a guid as the PK, and a DatePopulated column as an indexed column. When the user needs to insert data, grab a guid and getdate() and put it all in the table. Save the guid for later use.
If the user needs to retrieve the data, use the held guid to access it.
Have a scheduled job that runs periodically (daily, hourly, whatever) that deletes from the table after the desired retention period has expired.
No more tables created by users, schema gets backed up with your backups, everyone is happy!|||????
What happens when the same sproc executes at the same time?|||Each spid gets a different guid ... unique dataset per loser ... err I mean user.
Also have seen that technique used for delayed paging ... sweep data every 20 minutes or so to remove stale data.|||no, not your idea to have 1 table... what happens when the sproc is creating a table and it is executed at the same time?
ka boom
why not a temp table?|||I saw a design once where the developer appended the tables he created on the fly with the user name, and if there was a table already there he would add an incrementing number after the user name. It was amazingly bad and problematic and junked up the database and the execution plans something fierce because the tables were not always removed. I offered another solution. He rejected it because it was too much work. He was my boss. My tenure there was short.|||I saw a design once where the developer appended the tables he created on the fly with the user name, and if there was a table already there he would add an incrementing number after the user name. It was amazingly bad and problematic and junked up the database and the execution plans something fierce because the tables were not always removed. I offered another solution. He rejected it because it was too much work. He was my boss. My tenure there was short.
Was the boss short?|||no, not your idea to have 1 table...what happens when the sproc is creating a table and it is executed at the same time?
ka boom
why not a temp table?
Sorry Brett ... real live dba creates one and only one table before any proc runs. Proc inserts into the table, and retrieves data as needed. Scheduled job clears the table of stale data.
Proc does not create table, that way no loser tables to clean up!
Was the boss short?
With pointy hair!|||Was the boss short?
No, he was rather tall.|||The table name has an identifier embedded into it; it works like Tom's suggestion, but the data is distributed into separate tables with a look-up for the TableID. Please note, this was not my design so please don't shoot me! :shocked:
There is a nightly process to go through and drop the tables (which is one place where I'm running into the difficulty of having them created by the user login).
Cat|||got some examples?|||It's all very ugly because any stored procs that access these tables have to use dynamic SQL statements like...
declare @cmd varchar(1000)
set @cmd = 'select * from tDynBrokerage' + convert(varchar(10), @tabid)
exec (@cmd)  -- parentheses needed: without them EXEC looks for a proc named in @cmd
where @tabid is passed in from the application. I don't like it but I'm stuck with it for the time being. That's what I get for going on maternity leave! So... back to the original question. Any thoughts on that?
Thanks,
Cat|||Why can't these be global temps again?
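The two alternatives floated in the thread can be sketched side by side; all names here are illustrative:

```sql
-- (a) A global temp table: visible to all sessions, created once,
--     and dropped automatically when the last session referencing
--     it closes -- which answers "why can't these be global temps":
CREATE TABLE ##Brokerage
(
    RowGuid       uniqueidentifier NOT NULL PRIMARY KEY,
    DatePopulated datetime         NOT NULL,
    Amount        money            NULL
)

-- (b) Tom's design: one permanent dbo-owned table keyed by a
--     per-caller GUID, with a scheduled job sweeping stale rows:
-- DELETE dbo.Brokerage
-- WHERE  DatePopulated < DATEADD(hour, -24, GETDATE())
```

Either way the schema exists up front, so no tables are ever created under the application login and the dynamic-SQL table names go away.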
Wednesday, March 7, 2012
Owner
I have a few objects (views, tables, stored procedures) that are owned by a
user called 'appWebUser'. If I'm logged in under another user, i.e.
(appTrainer), I have to qualify the owner.object to access it instead of
just the object name. Is there any way to just use the object name like the
dbo owner. For example.
Currently: logged in as appTrainer
select * from appWebUser.table1
What I Want: logged in as appTrainer
select * from table1

Sorry, I should have included ...
If you want to get rid of the owner.object issue, change all of the object
owners to dbo.
"Morgan" <mfears@.spamcop.net> wrote in message
news:eCBjzulcDHA.3620@.TK2MSFTNGP11.phx.gbl...
> As long as all of the object names are unique for each type of object
> (table, view, etc) across the whole database, you can rename them using
> sp_changeobjectowner. However, as you'll see in BOL, any permissions on
the
> table must be reapplied once you change the owner. I would go about this
> very carefully.
>
> "Tim" <Tim@.NOSPAM.com> wrote in message
> news:#8dn8XlcDHA.2436@.TK2MSFTNGP12.phx.gbl...
> > I have a few objects (views, tables, stored procedures) that are owned
by
> a
> > user called 'appWebUser'. If I'm logged under another user, i.e.
> > (appTrainer), I have to qualify the owner.object to access it instead of
> > just the object name. Is there any way to just use the object name like
> the
> > dbo owner. For example.
> >
> > Currently: logged in as appTrainer
> > select * from appWebUser.table1
> >
> > What I Want: logged in as appTrainer
> > select * from table1
> >
> >
>
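Morgan's suggestion looks like this in SQL Server 2000; as the reply warns, any permissions on the object are dropped when the owner changes and must be re-granted:

```sql
-- Change the owner so two-part qualification is no longer needed:
EXEC sp_changeobjectowner 'appWebUser.table1', 'dbo'

-- Afterwards any user resolves the bare name, because name resolution
-- falls back to dbo when no object owned by the caller matches:
SELECT * FROM table1   -- now resolves to dbo.table1
```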
Saturday, February 25, 2012
Overriding Defaults
I have a stored procedure that has @BeginDate and @EndDate as parameters. I created a report with a default for both. They run just fine. After I deployed, I created linked reports and wanted to override the defaults. In the defaults, I tried to put in GetDate() for @BeginDate and GetDate()+10 for @EndDate so this can be passed in the WHERE statement of the stored procedure. I get 'Syntax error converting datetime from character string.'
What I assume is that if I override the default, the stored procedure will process what I put in the @BeginDate and @EndDate parameters.
The where statement looks like:
and (m.BeginDate >= @BeginDate) and (m.EndDate <= @EndDate)
I'm using Reporting Services 2005 and SQL Server 2005.
Thanks, Iris
I have figured out my own problem. In my where statement, in the stored procedure, I put in a case statement that calculates for today, yesterday, etc. for the BeginDate and EndDate parameters. I then created available values for today, yesterday, etc. I made the default for both parameters 'Today'. Then when I created my linked report from my master report, there was a drop down to select my date range options.
Thanks, Iris
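Iris's eventual fix can be sketched as follows; the procedure, table, and column names are illustrative, not taken from her report:

```sql
-- A preset-range string parameter (with available values such as
-- 'Today' and 'Yesterday' defined on the report) expanded to real
-- dates inside the procedure, so the linked report only picks a label:
CREATE PROCEDURE dbo.rpt_Activity
    @DateRange varchar(20) = 'Today'
AS
DECLARE @BeginDate datetime, @EndDate datetime
SET @BeginDate = CASE @DateRange
                     WHEN 'Today'     THEN CONVERT(varchar(10), GETDATE(), 120)
                     WHEN 'Yesterday' THEN CONVERT(varchar(10), GETDATE() - 1, 120)
                 END
SET @EndDate = DATEADD(day, 1, @BeginDate)

SELECT m.*
FROM   dbo.Measures m          -- illustrative table
WHERE  m.BeginDate >= @BeginDate
AND    m.EndDate   <  @EndDate
GO
```

This sidesteps the original error: GetDate() typed into a Reporting Services default is treated as a string, not evaluated, which is what produced the datetime conversion failure.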
Overloaded Stored Procedure
I want to create two stored procedures with the same name but accepting different params. Let's say Procedure A can accept param1; Procedure A can also accept param1, param2.
Is there any way SQL Server supports overloaded procedures with different signatures?
Regards
Imtiaz

You can use a feature called numbered stored procedures, so you can create somesp;1 and somesp;2. But I would not recommend doing this since the feature is being deprecated. In general, for data access or modifications you want to keep the TSQL interfaces simple to use.|||Thanks. As expected, you had replied.|||Can we also have something like overloaded functions in SQL 2005?|||No, but optional parameters are supported: CREATE PROC YourProc @Param1 INT, @Param2 INT = 0 AS ...
-- Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
-- <Imtiaz@.discussions.microsoft.com> wrote in message news:5dcaf46b-f56a-452b-b9a3-dc23f7c4e7a9@.discussions.microsoft.com...
Hi
I want to create two stored procedures with the same name but accepting different params. Let's say Procedure A can accept param1; Procedure A can also accept param1, param2.
Is there any way SQL Server supports overloaded procedures with different signatures?
Regards
Imtiaz
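Adam's optional-parameter suggestion, fleshed out into a small sketch (the proc body is illustrative):

```sql
-- One procedure instead of two overloads: the second parameter has a
-- default, so callers may omit it.
CREATE PROC dbo.YourProc
    @Param1 INT,
    @Param2 INT = 0
AS
    SELECT @Param1 AS p1, @Param2 AS p2
GO

EXEC dbo.YourProc 5       -- @Param2 takes its default of 0
EXEC dbo.YourProc 5, 7    -- both values supplied
```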
Monday, February 20, 2012
Overflowed int column
I am trying to run a stored procedure but I get the
following error:
Server: Msg 248, Level 16, State 1, Procedure
sp_MarriageLookup, Line 34
The conversion of the varchar value '2820103430'
overflowed an int column. Maximum integer value exceeded.
Stored Procedure: Unifirst919.dbo.sp_MarriageLookup
Return Code = -6
Output Parameter(s):
@Barcode = 2820103430
The length of the @Barcode variable is 10 characters and
it is a nchar type variable so I don't understand the 'int
column' overflow issue.
Also, the weird thing is that I can run this same stored
procedure on another server (similar database, just
different data) and it works just fine. See below:
Stored Procedure: Unifirst.dbo.sp_MarriageLookup
Return Code = 0
Output Parameter(s):
@Barcode = 4320000849
I don't see anything noticibly different between the two
databases.
What causes this problem and how do I correct?
Roger.

Here is a copy of the SP:
/*
Purpose: Given a marriage tag code, return the original
barcode
Input params: Marriage tag code, type varchar(10)
Output params: Original bar code, type varchar(10)
Error params: Output of '9' indicates lookup failed
*/
CREATE PROCEDURE sp_MarriageLookup
@Tag varchar(10),
@Barcode varchar(10) output
AS
Declare @eventid int
Declare cur_Marriage cursor for
Select CurrentTag from tblMarriage
Where MarriageTag = @Tag
Open cur_Marriage
Fetch next from cur_Marriage into @Barcode
If (@@Fetch_Status <> 0)
Begin
/* Error log: Lookup failed */
Set @Barcode = 9
Set @eventid = 901
End
Else Begin
Fetch Next from cur_Marriage into @Barcode
If (@@Fetch_Status = 0)
Begin
/* Error log: More than one entry for the marriage code */
Set @Barcode = 9
Set @eventid = 902
End
End
Close cur_Marriage
Deallocate cur_Marriage
If (@Barcode = 9)
Begin
Execute sp_LogError 1, @eventid, 0, @Tag
End
GO
>--Original Message--
>I am trying to run a stored procedure but I get the
>following error:
>Server: Msg 248, Level 16, State 1, Procedure
>sp_MarriageLookup, Line 34
>The conversion of the varchar value '2820103430'
>overflowed an int column. Maximum integer value exceeded.
>Stored Procedure: Unifirst919.dbo.sp_MarriageLookup
> Return Code = -6
> Output Parameter(s):
> @.Barcode = 2820103430
>The length of the @.Barcode variable is 10 characters and
>it is a nchar type variable so I don't understand the 'int
>column' overflow issue.
>Also, the wierd thing is that I can run this same stored
>procedure on another server (similar database, just
>different data) and it works just fine. See below:
>Stored Procedure: Unifirst.dbo.sp_MarriageLookup
> Return Code = 0
> Output Parameter(s):
> @.Barcode = 4320000849
>I don't see anything noticibly different between the two
>databases.
>What causes this problem and how do I correct?
>Roger.
>.
Maximum value for the int (SQL Server) data type is 2147483647.
So, if you convert the varchar value '2820103430' to int, an
overflow error is normal.
In the second case, where this stored procedure
works correctly on another server, check the data type
of the column to which this sp converts the varchar parameter.
ph
> I am trying to run a stored procedure but I get the
> following error:
> Server: Msg 248, Level 16, State 1, Procedure
> sp_MarriageLookup, Line 34
> The conversion of the varchar value '2820103430'
> overflowed an int column. Maximum integer value exceeded.
> Stored Procedure: Unifirst919.dbo.sp_MarriageLookup
> Return Code = -6
> Output Parameter(s):
> @.Barcode = 2820103430
> The length of the @.Barcode variable is 10 characters and
> it is a nchar type variable so I don't understand the 'int
> column' overflow issue.
> Also, the wierd thing is that I can run this same stored
> procedure on another server (similar database, just
> different data) and it works just fine. See below:
> Stored Procedure: Unifirst.dbo.sp_MarriageLookup
> Return Code = 0
> Output Parameter(s):
> @.Barcode = 4320000849
> I don't see anything noticibly different between the two
> databases.
> What causes this problem and how do I correct?
> Roger.
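ph's point can be reproduced in isolation. One likely trigger in this particular procedure (an assumption on my part, not confirmed in the thread) is the line `If (@Barcode = 9)`: int outranks varchar in SQL Server's data type precedence, so a mixed comparison converts the varchar barcode to int.

```sql
-- int's maximum is 2,147,483,647, so this fails with Msg 248:
-- SELECT CONVERT(int, '2820103430')

-- Widening to bigint avoids the overflow:
SELECT CONVERT(bigint, '2820103430') AS barcode_as_bigint

-- In the procedure, comparing like with like keeps the varchar side
-- from being implicitly converted to int:
-- If (@Barcode = '9')    -- instead of If (@Barcode = 9)
```

Whether the error actually fires then depends on the data, which would also explain why the same procedure runs cleanly against a database with different rows.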
Overflow error that doesn't make sense....
I'm troubleshooting a stored procedure that suddenly decided to stop working. I narrowed down the problem to the last part of the stored procedure, where it selects data from a temp table and inserts it into a physical table in the SQL 2000 database.
I keep receiving the following error:
Server: Msg 8115, Level 16, State 8, Line 140
Arithmetic overflow error converting numeric to data type numeric.
The data values all appear to be correct with none of them seeming to be out of precision, but I keep getting the error. I've tried casting all the values and it didn't work. It executes w/o error when I comment out that particular insert. I just don't get it.
Any help would be appreciated. Thanks.
Code below:
--
declare @dtAsOfDate DATETIME
set @dtAsOfDate = '2006-04-16';
DECLARE @RC INTEGER
--
-- 1) Eligible Investments:
--
-- Input: @SPVId - SPV we are running process for
-- @Yes - value of enum CCPEnum::eYesNoYes (get by lookup).
-- Output: Recordset (temp table) of Collaterals that are eligible for MV Test (#MVTriggerInvestments).
DECLARE @Yes INTEGER
EXEC @RC = [dbo].CPLookupVal 'YesNo', 'Yes', @Yes OUTPUT
IF (@RC <> 0) BEGIN
RAISERROR ('SP_OCCalculationMVTriggerTest: Failed to find Yes enum', 16, 1) WITH SETERROR
END
drop table #MVTriggerInvestments
BEGIN
SELECT dbal.SPVId,
dbal.CusipId,
dbal.GroupId,
@dtAsOfDate AS AsOfDate,
dbal.NormalOCRate,
dbal.SteppedUpOCRate,
dbal.AllocMarketValue AS MarketValue,
dbal.NbrDays,
dbal.PriceChangeRatio
INTO #MVTriggerInvestments
FROM DailyCollateralBalance dbal
JOIN CollateralGroupIncludeInOC gin
ON dbal.SPVId = 2
AND gin.SPVId = 2
AND dbal.AsOfDate = '2006-04-16'
AND @dtAsOfDate BETWEEN gin.EffectiveFrom AND gin.EffectiveTo
AND dbal.GroupId = gin.GroupId
AND gin.IncludeInOC = @Yes
END
select * from #MVTriggerInvestments
print 'end #1'
--select * from #MVTriggerInvestments --looks ok
--
-- 2) Calculate Weighted Average Price change ratio Market Value (by Group):
-- PCRMV - Price Change Ratio Market Value
--
-- Input : Recordset of collaterals (having New/Old prices, MarketValue defined)
-- Output: Recordset Aggregated by Group (#GroupOCRate)
drop table #MVTriggerGroup
BEGIN
SELECT A.SPVId,
A.AsOfDate,
A.GroupId,
A.NormalOCRate,
A.SteppedUpOCRate,
A.MarketValue,
cast([dbo].fn_divide_or_number (B.PriceChangeRatioMarketValue, B.MarketValueForPeriod, 0.00) as numeric(12,9)) as PriceChangeRatio,
CAST (0 AS NUMERIC(12,9)) AS OCRate,
CAST ('' AS VARCHAR(6)) AS OCRateType,
CAST (0 AS NUMERIC(18,2)) AS DiscMarketValue,
CAST (0 AS NUMERIC(18,2)) AS InterestAccrued
INTO #MVTriggerGroup
FROM
(
SELECT SPVId,
AsOfDate,
GroupId,
NormalOCRate,
SteppedUpOCRate,
cast(SUM(MarketValue) as numeric(18,2)) AS MarketValue
FROM #MVTriggerInvestments
GROUP BY SPVId, AsOfDate, GroupId, NormalOCRate, SteppedUpOCRate
) A --works up to here
JOIN
(SELECT SPVId,
cast(SUM(AllocMarketValue) as numeric(18,2)) AS MarketValueForPeriod ,
cast(SUM(AllocMarketValue * PriceChangeRatio) as numeric(18,2)) as PriceChangeRatioMarketValue,
GroupId
FROM T_DailyCollateralBalance
WHERE SPVId = 2
AND AsOfDate between '2006-03-17' and '2006-04-15'
AND IsBusinessDay = 1
GROUP BY SPVId, GroupId
) B
ON A.SPVId = B.SPVId
AND A.GroupId = B.GroupId
END
print 'end #2'
-- Calculate OCRate to apply for each group.
BEGIN
UPDATE #MVTriggerGroup
SET OCRate = (CASE WHEN ((PriceChangeRatio < 0) AND ABS(PriceChangeRatio) > (0.55 * NormalOCRate)) THEN SteppedUpOCRate
ELSE NormalOCRate
END),
OCRateType = (CASE WHEN ((PriceChangeRatio < 0) AND ABS(PriceChangeRatio) > (0.55 * NormalOCRate)) THEN 'stepup'
ELSE 'normal'
END)
END
print 'end #3'
--
-- Calculate discounted Market Value
--
UPDATE #MVTriggerGroup
SET DiscMarketValue = MarketValue / (1.0 + OCRate * 0.01)
print 'end #4'
-- Insert data from temp tables
-- 1)
select * from #MVTriggerInvestments
print 'begin tran'
BEGIN TRAN
DELETE T_MVTriggerInvestments
WHERE SPVId = 2 AND AsOfDate = '2006-04-16'
print 'DELETE T_MVTriggerInvestments'
--error is here!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
INSERT T_MVTriggerInvestments
(
SPVId ,
CusipId ,
GroupId ,
AsOfDate ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
NbrDays ,
OldPrice ,
NewPrice ,
PriceChangeRatio
)
SELECT SPVId ,
CusipId ,
GroupId ,
AsOfDate ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
NbrDays ,
0.00 ,
0.00 ,
PriceChangeRatio
FROM #MVTriggerInvestments
print 'end mvtriggerinv select'
COMMIT TRAN
--end error!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-- 2)
print 'begin tran 2'
BEGIN TRAN
DELETE T_OCTestGroup
WHERE SPVId = 2 AND AsOfDate = '2006-04-16'
INSERT T_OCTestGroup
(
SPVId ,
AsOfDate ,
GroupId ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
PriceChangeRatio,
OCRate ,
OCRateType ,
DiscMarketValue ,
InterestAccrued ,
SectionA ,
CPFace ,
IntExpense ,
Fees ,
SectionB ,
Receivables ,
IntReceivables ,
CashBalance ,
Investments ,
SectionC ,
ExcessCollateral,
MaxCPAllowed
)
SELECT
SPVId ,
AsOfDate ,
GroupId ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
PriceChangeRatio,
OCRate ,
OCRateType ,
DiscMarketValue ,
InterestAccrued ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0
FROM #MVTriggerGroup
print 'end tran 2'
COMMIT TRAN
Are the column definitions for the temp table the same for the table you're inserting into? Double-check the datatypes.

Yes. I checked that and went through and tried casting any variables that might have been implicitly converted. I also checked the data and tried excluding certain records that had negative values (for example), and the error persists...
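For reference, Msg 8115 with "numeric to data type numeric" is typically a precision/scale mismatch on the target column. A minimal reproduction, with illustrative values rather than the poster's data:

```sql
-- numeric(12,9) holds at most 999.999999999: 12 digits total,
-- 9 of them after the decimal point.
DECLARE @ratio numeric(18,2)
SET @ratio = 1234.56

-- This fails with Msg 8115, the same error as in the thread:
-- SELECT CAST(@ratio AS numeric(12,9))

-- Widening the integer part of the target fixes it:
SELECT CAST(@ratio AS numeric(15,9)) AS widened
```

So any value of PriceChangeRatio (or a column in the target table) with 4 or more digits before the decimal point would blow up a numeric(12,9) destination on INSERT.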
|||Maybe the permanent table has a computed column, or maybe there is a trigger on the table. Can you post the CREATE TABLE statements and the CREATE statements for any indexes, indexed views, or triggers for the tables involved? Steve Kass Drew University 7racer@.discussions.microsoft.com wrote:
> I'm troubleshooting a stored procedure that suddenly decided to stop
> working. I narrowed down the problem to the last part of the stored
> procedure where it selects data from a temp table and inserts it into a
> physical table in the SQL2000 database.
>
> I keep receiving the following error:
>
> Server: Msg 8115, Level 16, State 8, Line 140
> Arithmetic overflow error converting numeric to data type numeric.
>
> The data values all appear to be correct with none of them seeming to be
> out of precision, but I keep getting the error. I've tried casting all
> the values and it didn't work. It executes w/o error when I comment out
> that particular insert. I just don't get it.
>
> Any help would be appreciated. Thanks.
>
> Code below:
> -
>
> declare @.dtAsOfdate DATETIME
> set @.dtAsOfDate = '2006-04-16';
>
> DECLARE @.RC INTEGER
> -
> -- 1) Eligible Investments:
> -
>
> -- Input: @.SPVId - SPV we are running process for
> -- @.Yes - value of enum CCPEnum::eYesNoYes (get by
> lookup).
>
> -- Output: Recordset (temp table) of Collaterals that are eligible for
> MV Test (#MVTriggerInvestments).
>
> DECLARE @.Yes INTEGER
> EXEC @.RC = [dbo].CPLookupVal 'YesNo', 'Yes', @.Yes OUTPUT
> IF (@.RC<>0)BEGIN
> RAISERROR ('SP_OCCalculationMVTriggerTest: Failed to find Yes
> enum', 16, 1) WITH SETERROR
> END
> drop table #MVTriggerInvestments
> BEGIN
>
> SELECT dbal.SPVId,
> dbal.CusipId,
> dbal.GroupId,
> @.dtAsOfDate AS AsOfDate,
> dbal.NormalOCRate,
> dbal.SteppedUpOCRate,
> dbal.AllocMarketValue AS MarketValue,
> dbal.NbrDays,
> dbal.PriceChangeRatio
>
> INTO #MVTriggerInvestments
>
> FROM DailyCollateralBalance dbal
>
> JOIN CollateralGroupIncludeInOC gin
> ON dbal.SPVId = 2
> AND gin.SPVId = 2
> AND dbal.AsOfDate = '2006-04-16'
> AND @.dtAsOfDate BETWEEN gin.EffectiveFrom AND
> gin.EffectiveTo
> AND dbal.GroupId = gin.GroupId
> AND gin.IncludeInOC = @.Yes
>
> END
> select * from #MVTriggerInvestments
> print 'end #1'
> --select * from #MVTriggerInvestments --looks ok
>
> --
> -- 2) Calculate Weighted Average Price change ratio Market Value (by
> Group):
> -- PCRMV - Price Change Ratio Market Value
> --
>
> -- Input : Recordset of collaterals (having New/Old prices, MarketValue
> defined)
> -- Output: Recordset Aggregated by Group (#GroupOCRate)
> drop table #MVTriggerGroup
> BEGIN
>
> SELECT A.SPVId,
>        A.AsOfDate,
>        A.GroupId,
>        A.NormalOCRate,
>        A.SteppedUpOCRate,
>        A.MarketValue,
>        cast([dbo].fn_divide_or_number(B.PriceChangeRatioMarketValue,
>             B.MarketValueForPeriod, 0.00) as numeric(12,9)) as PriceChangeRatio,
>        CAST (0 AS NUMERIC(12,9))  AS OCRate,
>        CAST ('' AS VARCHAR(6))    AS OCRateType,
>        CAST (0 AS NUMERIC(18,2))  AS DiscMarketValue,
>        CAST (0 AS NUMERIC(18,2))  AS InterestAccrued
>
> INTO #MVTriggerGroup
>
> FROM
> (
> SELECT SPVId,
> AsOfDate,
> GroupId,
> NormalOCRate,
> SteppedUpOCRate,
> cast(SUM(MarketValue) as numeric(18,2)) AS MarketValue
>
> FROM #MVTriggerInvestments
> GROUP BY SPVId, AsOfDate, GroupId, NormalOCRate,
> SteppedUpOCRate
> ) A --works up to here
>
> JOIN
> (SELECT SPVId,
> cast(SUM(AllocMarketValue) as numeric(18,2)) AS
> MarketValueForPeriod ,
> cast(SUM(AllocMarketValue * PriceChangeRatio) as
> numeric(18,2)) as PriceChangeRatioMarketValue,
> GroupId
>
> FROM T_DailyCollateralBalance
> WHERE SPVId = 2
> AND AsOfDate between '2006-03-17' and '2006-04-15'
> AND IsBusinessDay = 1
> GROUP BY SPVId, GroupId
> ) B
>
> ON A.SPVId = B.SPVId
> AND A.GroupId = B.GroupId
>
> END
> print 'end #2'
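The `fn_divide_or_number` function called in step 2 is not shown in the post. Judging by the call site, it appears to be a safe-divide helper that returns a fallback value when the denominator is zero. A hypothetical reconstruction (the name and argument order match the call above; the body and types are assumptions):

```sql
-- HYPOTHETICAL sketch of the safe-divide helper used in step 2.
-- Returns @numerator / @denominator, or @default when the denominator is zero.
CREATE FUNCTION dbo.fn_divide_or_number
(
    @numerator   NUMERIC(18,2),
    @denominator NUMERIC(18,2),
    @default     NUMERIC(18,9)
)
RETURNS NUMERIC(18,9)
AS
BEGIN
    RETURN CASE WHEN @denominator = 0
                THEN @default
                ELSE @numerator / @denominator
           END
END
```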
>
> -- Calculate OCRate to apply for each group.
>
> BEGIN
> UPDATE #MVTriggerGroup
> SET OCRate = (CASE WHEN ((PriceChangeRatio < 0) AND
> ABS(PriceChangeRatio) > (0.55 * NormalOCRate)) THEN SteppedUpOCRate
> ELSE NormalOCRate
> END),
> OCRateType = (CASE WHEN ((PriceChangeRatio < 0) AND
> ABS(PriceChangeRatio) > (0.55 * NormalOCRate)) THEN 'stepup'
> ELSE 'normal'
> END)
> END
> print 'end #3'
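The step-up rule in step 3 fires when the price-change ratio is negative and its magnitude exceeds 55% of the normal OC rate. With illustrative numbers (not taken from the post), NormalOCRate = 10 and PriceChangeRatio = -6 gives ABS(-6) = 6 > 0.55 * 10 = 5.5, so the stepped-up rate would apply:

```sql
-- Illustrative check of the step-up condition (values are made up):
SELECT CASE WHEN (-6 < 0) AND ABS(-6) > (0.55 * 10)
            THEN 'stepup' ELSE 'normal'
       END AS OCRateType   -- yields 'stepup'
```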
> ----------------------------------------------------------------------
> -- Calculate discounted Market Value
> ----------------------------------------------------------------------
> UPDATE #MVTriggerGroup
> SET DiscMarketValue = MarketValue / (1.0 + OCRate * 0.01)
> print 'end #4'
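The discount step divides each group's market value by one plus the OC rate taken as a percentage. For example, with an illustrative MarketValue of 1,000,000 and an OCRate of 5 (i.e. 5%):

```sql
-- Illustrative values only (not from the post):
-- DiscMarketValue = 1000000 / (1.0 + 5 * 0.01) = 1000000 / 1.05 = 952380.95
SELECT CAST(1000000 / (1.0 + 5 * 0.01) AS NUMERIC(18,2)) AS DiscMarketValue
```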
>
> -- Insert data from temp tables
>
> -- 1)
> select * from #MVTriggerInvestments
>
> print 'begin tran'
> BEGIN TRAN
> DELETE T_MVTriggerInvestments
> WHERE SPVId = 2 AND AsOfDate = '2006-04-16'
> print 'DELETE T_MVTriggerInvestments'
> -- error is here !!!!!!!!!!!!!!!!!!!!
> INSERT T_MVTriggerInvestments
> (
> SPVId ,
> CusipId ,
> GroupId ,
> AsOfDate ,
> NormalOCRate ,
> SteppedUpOCRate ,
> MarketValue ,
> NbrDays ,
> OldPrice ,
> NewPrice ,
> PriceChangeRatio
> )
> SELECT SPVId ,
> CusipId ,
> GroupId ,
> AsOfDate ,
> NormalOCRate ,
> SteppedUpOCRate ,
> MarketValue ,
> NbrDays ,
> 0.00 ,
> 0.00 ,
> PriceChangeRatio
>
> FROM #MVTriggerInvestments
> print 'end mvtriggerinv select'
> COMMIT TRAN
> -- end error !!!!!!!!!!!!!!!!!!!!
> -- 2)
> print 'begin tran 2'
> BEGIN TRAN
> DELETE T_OCTestGroup
> WHERE SPVId = 2 AND AsOfDate = '2006-04-16'
>
> INSERT T_OCTestGroup
> (
> SPVId ,
> AsOfDate ,
> GroupId ,
> NormalOCRate ,
> SteppedUpOCRate ,
> MarketValue ,
> PriceChangeRatio,
> OCRate ,
> OCRateType ,
> DiscMarketValue ,
> InterestAccrued ,
> SectionA ,
> CPFace ,
> IntExpense ,
> Fees ,
> SectionB ,
> Receivables ,
> IntReceivables ,
> CashBalance ,
> Investments ,
> SectionC ,
> ExcessCollateral,
> MaxCPAllowed
> )
> SELECT
> SPVId ,
> AsOfDate ,
> GroupId ,
> NormalOCRate ,
> SteppedUpOCRate ,
> MarketValue ,
> PriceChangeRatio,
> OCRate ,
> OCRateType ,
> DiscMarketValue ,
> InterestAccrued ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0 ,
> 0
>
> FROM #MVTriggerGroup
> print 'end tran 2'
> COMMIT TRAN
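A side note on the transactions above: both BEGIN TRAN blocks commit unconditionally, so a failed DELETE or INSERT can leave an open transaction if the batch aborts. On SQL Server 2000-era systems a common pattern is to check @@ERROR after each statement and roll back on failure. A sketch using the first block's table names (from the post):

```sql
DECLARE @err INT

BEGIN TRAN
DELETE T_MVTriggerInvestments
WHERE  SPVId = 2 AND AsOfDate = '2006-04-16'
SET @err = @@ERROR

IF @err = 0
BEGIN
    INSERT T_MVTriggerInvestments (SPVId, CusipId, GroupId, AsOfDate,
        NormalOCRate, SteppedUpOCRate, MarketValue, NbrDays,
        OldPrice, NewPrice, PriceChangeRatio)
    SELECT SPVId, CusipId, GroupId, AsOfDate,
        NormalOCRate, SteppedUpOCRate, MarketValue, NbrDays,
        0.00, 0.00, PriceChangeRatio
    FROM #MVTriggerInvestments
    SET @err = @@ERROR
END

IF @err = 0
    COMMIT TRAN
ELSE
    ROLLBACK TRAN
```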
>