
Monday, March 26, 2012

Package with File System Task doesn't work unless a copied package contains sensitive data with a user key

This problem is a bit weird but I'm just wondering if anybody else experienced this.

I have a package that has File System Tasks (copying .dtsx files, actually). Basically, the package copies other packages to a pre-defined destination. The thing is, it only works if one of the packages it is configured to copy has some sort of sensitive data (e.g., a connection string with a password); otherwise it reports a success message on execution but doesn't actually do anything. I've checked ForcedExecutionResult and it is set to None, for that matter.

Just wondering if anybody else experienced this problem and of course if there's a way to solve it.

Thanks.|||Could the security permissions be different on the packages that are failing to be copied versus those that are succeeding?|||No, they're all set to EncryptSensitiveWithUserKey.

However, all of the packages involved are set to obtain their sensitive information from indirect configuration files. The only workaround I've found so far was to put in a dummy connection object, set up an XML file package configuration for that dummy, and deploy it that way.

I suspect that the user key isn't generated at all when all the packages get their configurations from indirect configuration files. For some reason, DontSaveSensitive doesn't work either; my original package insists on the presence of a key. Might it have something to do with file read/write/modify permissions that come with the key?

I still want to do away with the dummy though.

Package variables passed to Execute SQL Task

Hi,

I am having trouble getting the Execute SQL Task to recognize the package-level variables in an SSIS package. The tasks fail on execution. The package contains three Execute SQL Tasks and none of them receive the variables; however, when I replace the '?' in the code with the variable values, they execute fine. These are the steps I have taken thus far:

Made sure the three variables are package level.

Verified I'm using a valid OLE DB connection.

Verified the variables are properly mapped in the parameters mapping section of the task properties.

Tried all 4 types of variable-to-parameter mapping (even though '?' in the query and '0, 1, 2...' as the parameter names should work).

Changed the BypassPrepare property to 'True'.

Used breakpoints and watch windows to verify the variable values.

I even tried the simple tutorials with a one-line SQL statement that uses a variable, but had the same problem. I know there must be something simple that I'm missing here, but any info would be greatly appreciated!

Thanks!

Also, the code for the first Execute SQL Task that takes the parameter is:

DECLARE @DatabaseName varchar(100)  -- will hold the package variable's value
SET @DatabaseName = ?               -- '?' is the OLE DB parameter marker (parameter name "0")
DECLARE @SPID smallint

And, of course, when I replace the ? with the DB name from the Value field of the variable, it executes.
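As a reference point, here is a minimal statement that simply echoes the mapped parameter back as a single-row result set (a test sketch, not code from the package; it assumes the variable is mapped to parameter name 0 on the OLE DB connection with BypassPrepare set to True, as above):

DECLARE @DatabaseName varchar(100);
SET @DatabaseName = ?;                  -- receives the package variable
SELECT @DatabaseName AS ReceivedValue;  -- shows what actually arrived

With the task's ResultSet set to "Single row", ReceivedValue can be captured into another variable for inspection.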

The Parameter Mapping page has the following values (Variable Name, Direction, Data Type, Parameter Name):

User::Databasename Input Varchar 0

And the Variables setup has the following values (Name, Scope, Data Type, Value):

databasename Package String DB_NAME

Is there a way to see the query after the variables have been substituted, so I can check whether something in the values is syntactically incorrect? Please let me know if any further info would help; I'm really stumped on this since I get the same results with the examples found everywhere else too.

Thanks!|||

You could use an alternative approach to build the SQL statement. Create a new variable, let's say SQLStatement; then use an expression (set EvaluateAsExpression=True) on that variable to form the SQL statement. Something like:

"Select * from " + @.[User::Variable1]

Then in the Execute SQL Task, set SQLSourceType=Variable and select the variable name in the SourceVariable drop-down list.

I like this approach because you can easily see how the SQL statement is being built.
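For instance, with the databasename variable from this thread, the first statement could be formed by an expression along these lines (a sketch; the remainder of the original statement is elided):

"DECLARE @DatabaseName varchar(100); SET @DatabaseName = '" + @[User::databasename] + "'; DECLARE @SPID smallint; ..."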

|||Thanks for the quick reply, Rafael. I am unclear as to how this would resolve the issue. The package contains three lengthy Execute SQL Tasks, and the package needs to be replicated for several different databases that are independently accessed in a development environment. My original idea was to use the package variable to simply change the DB name. Are you saying I should try changing each task to SQLSourceType=Variable and use a SQLStatement-like variable with an expression for the code presently in the Execute SQL Task? If so, it's definitely worth a try, but I'm still unclear as to how I'm misusing the package variables, and I'm sure I'll find future situations where they will be useful. Any ideas?

Thanks again!
|||

jheywood wrote:

Are you saying I should try changing each task to SQLSourceType=Variable and use a SQLStatement-like variable with an expression for the code presently in the Execute SQL Task?
Thanks again!

Yes, that is my idea. That gives you the benefit of testing the expression and checking exactly how the SQL statement is going to be sent.

jheywood wrote:

but I'm still unclear as to how I'm misusing the package variables and I'm sure I'll find future situations where they will be useful. Any ideas?

Not sure... did you try changing the value of BypassPrepare in the Execute SQL Task?

|||Yes, I tried that too. I was hoping it was maybe just some stupid syntax problem that I had overlooked with the parameter or the variable, but I guess not.

I'll try the other method since that definitely seems like a viable option. Thanks so much for the help!

Please let me know if you think of anything else about the variable use, or if you need more information. I'm really stumped on it.

Wednesday, March 21, 2012

Package hangs on parallel "upserts"

I have two data flow tasks: one which validates the import file, and one which processes the import file if the validation passed. The validation runs and pushes the three row types to three different recordset destinations. When I enter the processing data flow task, I have three parallel trees processing each recordset saved in the previous task. I'm using a script component to generate the rows, which are then sorted and merged with the production database to find existing records. Based on this, I split to an OLE DB Command (running an UPDATE statement) or an OLE DB Destination (to simply insert the records).
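For context, the OLE DB Command in each tree executes a parameterized statement once per input row, shaped roughly like this (the table and column names here are placeholders, not the actual schema):

UPDATE dbo.Production
SET Col1 = ?, Col2 = ?   -- '?' markers are mapped to input columns
WHERE KeyCol = ?;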

In this particular case, all records are being updated and nothing is being inserted new. Two of the three trees will complete the sort but hang on the merge, split, and OLE DB command components. The other will do the same but also hang on the split.

In another case, I truncated each destination table before running the package, and the package ran fine.

Are toes being stepped on in the data flow task, causing a deadlock?

Update: I removed the sort transformation and sorted the rows before pushing them to the recordsets, and I still get the same results.

If you were getting a deadlock then I think SQL Server would capture this and throw an error. That's not what's happening here; it's more likely that a lock is occurring. Use sp_who2 and sp_lock to determine if blocking is occurring.
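For example, both can be run from Management Studio while the package is hanging:

EXEC sp_who2;  -- check the BlkBy column for blocking session IDs
EXEC sp_lock;  -- lists which sessions hold locks on which objects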

-Jamie

|||

A common issue when you're modifying the same table via multiple paths in your data flow is a deadlock on the destination table. To resolve:

1) Choose "Fast load" and "Don't lock table" in the OLE DB Destination.

2) Ensure that you don't have a race condition between the paths, i.e., that any updates which depend on inserts from the same run only execute after those inserts have reached the DB.

Cheers/Kristian

|||I checked sp_who2 and nothing was in the BlkBy column.
Fast load is being used, and I chose "don't lock table", but that didn't fix the issue.
The updates don't depend on the inserts; I merge the input with production on keys and split on NULL values in production.

I ended up splitting the three trees into three data flow tasks to see if that would fix the issue, but I get the same results; only this time I noticed that not all rows have left the merge. The merge is a left outer join on keys, and in this case all input records are already in the database. So I'm joining 62k input records with the same number of (identical) production records, and only 9,937 rows leave the merge. Also, when running the package, that task remains yellow.
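One way to sanity-check the join outside the data flow is to run the equivalent query in T-SQL and compare row counts (a sketch; the staging and production table names are hypothetical):

SELECT COUNT(*) AS matched
FROM dbo.Staging s
INNER JOIN dbo.Production p ON p.KeyCol = s.KeyCol;
-- if this matches all 62k rows but the Merge Join emits fewer,
-- suspect sort order, collation, or key case differences between the inputs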
|||

The solution is pretty simple: do the inserts and updates in separate data flows. You can use raw files to pass data between data flows.

-Jamie

Friday, March 9, 2012

P.S. Tasks > Export

I have spent the last 2 months or so designing and modeling my new application backend in SQL Server 2005 Express edition.

I am ready to move it to a production copy of SQL Server 2005.

How do I go about this?

What kind of search terms can I put into Google to bring up some useful results?

I have been looking in many places, but have had no luck finding how to take all the tables I have made (some with data, some not) and copy them to a live SQL Server 2005 instance.

Thanks in advance for any help.

When I right-click on the database in the manager, I choose Tasks...

But the Import/Export options are NOT THERE.

What do I do?

Why is this option not in my instance of SQL Express?

Thank you.

|||Depending on how you want the database hosted, you could use detach/attach, backup/restore, or scripting of the schema and data.
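For the backup/restore route, a minimal sketch (the database name, paths, and logical file names here are hypothetical):

-- on the SQL Server Express instance
BACKUP DATABASE MyAppDb TO DISK = 'C:\Backups\MyAppDb.bak';

-- on the production server, after copying the .bak file across
RESTORE DATABASE MyAppDb
FROM DISK = 'C:\Backups\MyAppDb.bak'
WITH MOVE 'MyAppDb' TO 'D:\Data\MyAppDb.mdf',
     MOVE 'MyAppDb_log' TO 'D:\Data\MyAppDb_log.ldf';

|||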

How can I produce a schema? I'm assuming that is what I need to do. I am moving the database from my local machine (SQL Server Express) to a hosted SQL Server 2005 machine.

Thank you for your help,

Jon

|||You would need to write a few lines of code that call SMO.|||Jon,
I am in the same boat with my Express db. I have signed up with CrystalTech for hosting. Did you find a good solution to your problem?

Greg Lamb