
Wednesday, March 7, 2012

How to simplify what I'm doing (AS the solution?)

I need to create reports that run queries where the results are
percentages: a total count divided by counts taken from subqueries
with different WHERE values.
A la...
select 100 * (totalCount / (select count where columnA value is x)) as perc1,
       100 * (totalCount / (select count where columnA value is y)) as perc2
       etc.
from myTable where (..bunch of where stuff..)
I have simplified the example above just for brevity.
So, I hate these kinds of nested queries - they get incredibly
complex to read. I could break them up into longer scripts to simplify
things - but I am wondering if there's another way.
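To make the shape of it concrete, a runnable version of the kind of
statement I end up with looks roughly like this (the table and column
names here are invented for the post, not my real schema):

SELECT
    100.0 * COUNT(*)
          / NULLIF((SELECT COUNT(*) FROM dbo.myTable WHERE columnA = 'x'), 0) AS perc1,
    100.0 * COUNT(*)
          / NULLIF((SELECT COUNT(*) FROM dbo.myTable WHERE columnA = 'y'), 0) AS perc2
FROM dbo.myTable
WHERE reportDate >= '20050101';   -- stands in for the "bunch of where stuff"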
The data I am using comes from a warehouse on SQL Server 2005. I have
very little experience with warehouses, and know little about Analysis
Services and OLAP.
My question is whether it is possible to use Analysis Services to
create new tables that make this data far easier to query. That's the
path I am considering going down, but I don't want to hit a dead end,
so I just need to know whether it makes sense: using Analysis Services
to crunch the tables I have into one or two new tables that hold the
results neatly packaged and are fast and simple to query. Is that the
idea of AS?
Thanks for helping a learner here. Let me know if I am making enough
sense in the question.
Hi there,
You say you're using a data warehouse. How is the DW structured? Is it
a relational star schema? Part of the idea of using a DW is that you
move the data out of your OLTP systems, because those are not well
suited for reporting: you typically have to join, group and aggregate
the data in very complex ways. A star schema is still relational, but
far more intuitive and better suited for reporting. Why AS, then? AS
won't create new tables for you; it puts the data in cubes, which are
multidimensional storage. The point is that you pre-aggregate the data
and speed up queries by querying the cubes rather than the relational
data. You could say you're breaking a lot of the rules of relational
design to get the better performance. In AS you work with measures and
dimensions - for example Sales per Customer, where Sales is a measure
and Customer is a dimension.
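Against a cube you then write MDX instead of SQL - roughly like this
(the cube, measure and dimension names here are only placeholders, not
anything from your system):

select [Measures].[Sales Amount] on columns,
       [Customer].[Customer].MEMBERS on rows
from [Sales Cube]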
So the process is normally:
OLTP -> Staging -> DW -> Cube <- Report (your reports query the AS cube)
I can't tell whether you are querying your OLTP system directly at the
moment. For complex reports that is normally a bad idea. But it is hard
for me to say whether AS is the right way to go without knowing your
exact output requirements.
Hope this enlightens things a little bit anyway.
Regards
Bobby Henningsen


Saturday, February 25, 2012

Overriding the aggregate value of a measure to total only the children of an attribute

Going off of Adventure Works, there's a product attribute:

[Product].[Product].[Product]

and now I also have an old product attribute:

[Product].[Old Product].[Old Product]

(people wanted to see before-and-after views of the different product assignments).

I also have an estimate of sales for each product and for each old product. When using just this measure, the amount is around twice as much as before - that's OK - but when using it with just product or just old product, I only want to see the sum of that attribute's children.

How could one define a calculated measure so that if the current level is [Product].[Product].All, then the aggregate should be the sum of the children?
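Roughly, the kind of definition I have in mind is something like this (just a sketch - [Estimated Sales] is my own measure, not a stock Adventure Works one, and the exact name of the All member may differ):

CREATE MEMBER CURRENTCUBE.[Measures].[Estimated Sales (Products Only)]
AS
    IIF(
        // at the All level of the Product attribute, total just the children
        [Product].[Product].CurrentMember.Level.Ordinal = 0,
        SUM([Product].[Product].[Product].MEMBERS, [Measures].[Estimated Sales]),
        // below the All level, keep the stored value as-is
        [Measures].[Estimated Sales]
    ),
    VISIBLE = 1;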

This is sort of how I'd be using it:

select [Measures].[Estimated Sales] on 0,
       [Product].[Product].[Product].MEMBERS on 1
from [Adventure Works]

The All line should be the sum of the measure over all the children (the new behavior), and each product row would have its own estimated sales - so those would stay the same.

Thinking about this a little more - the 'All' level would also be the current level when the attribute isn't used at all, so I'm not sure how to detect which product attribute was actually added. And if a person added both? I'd probably want to just show the product totals.

What I'm going to try for now is adding another dimension measure with an attribute defaulted to 1: in the estimated sales, I'll add a 1 to the new records and a 0 to the old ones. This should address the issue that I have.

Thanks!

Monday, February 20, 2012

Overflow error

On Thu, 13 Oct 2005 19:35:16 GMT, Mike wrote:

>I have the SQL table column PRICE set for decimal (14,14).

Hi Mike,

That means that you have a total of 14 digits, all 14 of which are to
the right of the decimal point, leaving no digits to the left.

>Any one know why I would get an overflow error.

Probably because there's a value of 1.000 or above, or -1.000 or below,
in your data.
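You can see the limit directly (the values here are just examples):

DECLARE @p decimal(14,14);
SET @p = 0.99999999999999;    -- fine: all 14 digits sit to the right of the point
-- SET @p = 1.0;              -- would fail with an arithmetic overflow error

DECLARE @price decimal(19,4);
SET @price = 1234.5678;       -- a scale such as (19,4) leaves room to the left of the point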

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)

I am trying to use the bulk insert command but I am getting the following
error:

--Begin Error Msg--
Server: Msg 4867, Level 16, State 1, Line 1
Bulk insert data conversion error (overflow) for row 1, column 3 (PRICE).
--End Error Msg--

The data is formatted as a Number (Single) with 14 decimal places. I am
exporting to a CSV text file from MS Access.
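The load itself is a plain BULK INSERT, roughly of this shape (the path,
table name and options here are simplified stand-ins, not my real ones):

BULK INSERT dbo.PriceImport
FROM 'C:\export\prices.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');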

I have the SQL table column PRICE set for decimal (14,14).

Anyone know why I would get an overflow error?

Mike
m charney at dunlap hospital dot org

OK, I fixed those; now I have the following error:

--Begin Error Msg--
Server: Msg 4864, Level 16, State 1, Line 1
Bulk insert data conversion error (type mismatch) for row 1, column 14
(STDCOST).
--End Error Msg--

The STDCOST column is set to decimal(28,14) and is formatted in Access
as a Number (Single) with 14 decimal places. I don't know why I would
be getting a type mismatch error.

Any idea?

Mike

"Hugo Kornelis" <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:a3etk1hrq4v7odfrk71r0hli5lfivqebgt@.4ax.com...
> On Thu, 13 Oct 2005 19:35:16 GMT, Mike wrote:
>>I have the SQL table column PRICE set for decimal (14,14).
> Hi Mike,
> That means that you have a total of 14 digits, 14 of which are to the
> right of the decimal. Leaving no digits to the left.
>>Any one know why I would get an overflow error.
> Probably because there's a value above 1.000 or below -1.000 in your
> data.
> Best, Hugo
> --
> (Remove _NO_ and _SPAM_ to get my e-mail address)|||On Thu, 13 Oct 2005 20:34:14 GMT, Mike wrote:

>Ok I fix those now I have the following error
>--Begin Error Msg--
>Server: Msg 4864, Level 16, State 1, Line 1
>Bulk insert data conversion error (type mismatch) for row 1, column 14
>(STDCOST).
>--End Error Msg--
>The STDCOST is set to decimal (28,14) and is a formatted in Access as a
>number, single with 14 decimal. I don't know why I would be getting a Type
>Mismatch error.
>Any idea?

Hi Mike,

My guess is that there's some dirty data in the file you're trying to
read in.

You might want to double-check the data. And the format file you used.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)

Sorry for the double post. I don't know what I was thinking.

I looked for bad data but I did not see any. With the column set to
Number (Single) in MS Access, I don't think there could be any
non-numeric data in the column. The only other thought I had was NULL
values in the column, but the SQL table is set to allow NULLs.

What exactly is a type mismatch error? What does it mean? I thought it
might mean I was trying to put non-numeric data into a numeric column,
but I don't see any.

I am stuck...

Mike Charney

On Thu, 13 Oct 2005 23:34:13 GMT, Mike wrote:

>Sorry for the double post. I dont know what I was thinking.
>I looked for bad data but I did not see any. With the column being set
>to numeric and singal in MS-Access, I dont think there could be any non-
>numeric data in the column. The only thought I would have is Null values
>being in the column but the SQL table is set to allow Null values.

Hi Mike,

I see in the other thread that you solved this problem and are now
fighting the next problem. Do post the information Erland requested. I'm
reading that thread as well, so it's quite likely that at least one of
us can spot the problem if you provide something for us to look at.

>What exactly is a Type Mismatch error? What does it mean? I thought it
>might mean I was trying to input non-numeric data into a numeric column,
>but I dont see any.

Yes, that is exactly what it means: the data in the input is not
compatible with the data type of the column in the table. Something
like "one" in a numeric column, or "30 feb 2005" in a datetime column.
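For instance, both of these fail for the same underlying reason (the
values are made up):

SELECT CAST('one' AS decimal(28,14));      -- not a number at all
SELECT CAST('30 feb 2005' AS datetime);    -- a recognized format, but not a real date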

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)