
Thursday, March 29, 2012

get an arithmetic overflow with count(*)

Doing a select count(*) on a table that I believe has more rows than the max
value of an int gives me an arithmetic overflow.
How can I rewrite my query to find the number of rows?

Hi,
Look up COUNT_BIG in Books Online.
John
"Hassan" wrote:

> Doing a select count(*) on a table that I believe has more rows than the max
> value of an int gives me an arithmetic overflow.
> How can I rewrite my query to find the no. of rows ?
>
>
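John's pointer, sketched in T-SQL: COUNT_BIG works like COUNT but returns a bigint rather than an int, so it does not overflow once a table passes 2,147,483,647 rows (the table name below is illustrative):

```sql
-- COUNT_BIG(*) returns bigint, avoiding the int arithmetic overflow
SELECT COUNT_BIG(*) AS TotalRows
FROM dbo.VeryLargeTable;
```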

Get a total on rows that are filtered using the Group Filter in a Table

Is it possible to get a sub total for a Top N filter?
I've got a group that contains a Top N filter.
The filter works properly; for example, I see only 3 rows per group.
However, I cannot get a subtotal for only those 3 rows; it calculates
the subtotal over all of the rows in the main group.
How would I get the sum for only the 3 rows that are being displayed
on the report?

Hi,
Add a visibility condition to the other subtotals. What I mean is: to
select each set of three records you would have taken a mod, right? Use
that now to avoid showing the other totals,
something like this:
RowNumber(Nothing) \ 3 < 2, so that the subtotal is invisible for that row.
Thank you
Raj Deep.A

Tuesday, March 27, 2012

get 5 rows from 2 unrelated tables (was "Complicated sql query.... need help bad")

I am trying to write a query that queries 2 tables and gets back 5 rows. These two tables aren't really related, and the data I get back is different for each table.
The 5 rows that I want to get back are for the LATEST 5 DATES. The date field for one table (F_INSPECTIONS) is END_DATE and the date field for the other table (F_OCCURRENCES) is OCCURRENCE_DATE.

I am writing a program to do this, so if it's absolutely impossible to implement this with SQL code, then a suggestion of how I might go about doing it another way would help.

Table descriptions:

/****** Object: Table [dbo].[F_INSPECTIONS] Script Date: 2/8/2005 10:59:41 AM ******/
CREATE TABLE [dbo].[F_INSPECTIONS] (
[INSPECTION_ID] [varchar] (15) COLLATE SQL_Latin1_General_CP850_CI_AI NOT NULL ,
[INSPECTION_NAME] [varchar] (150) COLLATE SQL_Latin1_General_CP850_CI_AI NULL ,
[CAP_FACILITY_ID] [int] NOT NULL ,
[REG_SURR_ID] [smallint] NOT NULL ,
[START_DATE] [datetime] NULL ,
[END_DATE] [datetime] NULL ,
[INSP_UPDATED_ON] [datetime] NULL ,
[INSP_ORIGIN_ID] [tinyint] NULL ,
[INSP_TYPE_ID] [tinyint] NULL ,
[DAYS_SINCE_LAST] [smallint] NULL ,
[VIOLATION_COUNT] [smallint] NULL ,
[NON_COMPLIANCE_IND] [tinyint] NULL ,
[INSPECTION_COUNT] [smallint] NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[F_OCCURRENCES] (
[OCCURRENCE_ID] [varchar] (15) COLLATE SQL_Latin1_General_CP850_CI_AI NOT NULL ,
[CAP_FACILITY_ID] [int] NOT NULL ,
[OCCURRENCE_NM] [varchar] (150) COLLATE SQL_Latin1_General_CP850_CI_AI NULL ,
[OCCURRENCE_DATE] [datetime] NULL ,
[REG_SURR_ID] [smallint] NOT NULL ,
[REPORTED_DATE] [datetime] NULL ,
[ASSESSMENT_DATE] [datetime] NULL ,
[UPDATED_ON] [datetime] NULL ,
[ORIGIN_ID] [tinyint] NULL ,
[CATEGORY_ID] [tinyint] NULL ,
[OUTCOME_ID] [tinyint] NULL
) ON [PRIMARY]

I need to query from these 2 tables and get these columns back:

|||Are you trying to get back only 5 rows (attempting to join 2 unrelated tables), or 5 rows from each table (a candidate for a UNION)?

|||If I understood well, you'll need something like this: first "join" similar results from both tables using UNION, and then select the first five of them.

SELECT ID, dat
FROM (SELECT ins_id ID, end_date dat
FROM F_INS
UNION
SELECT occ_id ID, occ_date dat
FROM F_OCC
ORDER BY 2)
WHERE ROWNUM <= 5
ORDER BY 2;

Selecting "top 5" rows can be done in different ways; this one (using the ROWNUM pseudocolumn) is used in Oracle. MySQL uses LIMIT instead, while SQL Server and Access use "SELECT TOP 5 ..."; I don't know which database engine you use (is it Access?), but I hope my example will help you find the result.

|||This is exactly the logic I was looking for (top 5 dates out of the entire set). Hopefully a union like this works in SQL SERVER.

Thanks so much.

|||My translation of the Oracle syntax into SQL-92 (which should run on MS-SQL 2000) would be:

SELECT TOP 5 ID, dat
FROM (
SELECT ins_id ID, end_date dat
FROM F_INS
UNION SELECT occ_id ID, occ_date dat
FROM F_OCC) AS A
ORDER BY 2
-PatP

|||Here is the statement I wrote using that same logic. It works great; however, I can't get the TOP 5 to work.

(select
F_OCCURRENCES.CAP_FACILITY_ID,
F_OCCURRENCES.REG_SURR_ID as reg_surr_id,
'N/A' as INSPECTOR,
'N/A' as COMPLIANCE_STATUS,
'OCCURRENCE' as ACTIVITY,
D_OCCURRENCE_OUTCOME.OUTCOME_ENG_DESC as OUTCOME,
D_REGULATION.REG_ENGLISH_DESC AS REGULATION,
F_OCCURRENCES.OCCURRENCE_DATE as theDATE
from F_OCCURRENCES INNER JOIN D_OCCURRENCE_OUTCOME ON F_OCCURRENCES.OUTCOME_ID = D_OCCURRENCE_OUTCOME.OUTCOME_ID INNER JOIN D_REGULATION ON D_REGULATION.REG_SURR_ID = F_OCCURRENCES.REG_SURR_ID where cap_facility_id = '11518' and F_OCCURRENCES.REG_SURR_ID = '101'
UNION
SELECT
F_INSPECTIONS.CAP_FACILITY_ID,
F_INSPECTIONS.REG_SURR_ID as reg_surr_id,
'NO DATA' as INSPECTOR,
CASE Non_COMPLIANCE_IND WHEN 1 THEN 'Non-Compliant' WHEN 0 THEN 'Compliant' END as COMPLIANCE_STATUS,
'INSPECTION' as ACTIVITY,
DISP_ENGLISH_DESC as OUTCOME,
D_REGULATION.REG_ENGLISH_DESC AS REGULATION,
F_INSPECTIONS.START_DATE as theDATE
FROM F_INSPECTIONS LEFT JOIN F_VIOLATIONS ON F_INSPECTIONS.INSPECTION_ID = F_VIOLATIONS.INSPECTION_ID Left Join D_DISPOSITION on D_DISPOSITION.VIOLATION_DISP_ID = F_VIOLATIONS.VIOLATION_DISP_ID LEFT JOIN D_REGULATION ON F_INSPECTIONS.REG_SURR_ID = D_REGULATION.REG_SURR_ID where F_INSPECTIONS.REG_SURR_ID = '101' and F_INSPECTIONS.CAP_FACILITY_ID = '11518'
)
order by theDate desc

|||This is just a "shot in the dark" since I'm not willing to take the time to analyze this SQL at the moment, but if that SQL works then I'd suggest:

SELECT TOP 5 *
FROM (
SELECT
F_OCCURRENCES.CAP_FACILITY_ID
, F_OCCURRENCES.REG_SURR_ID as reg_surr_id
, 'N/A' as INSPECTOR
, 'N/A' as COMPLIANCE_STATUS
, 'OCCURRENCE' as ACTIVITY
, D_OCCURRENCE_OUTCOME.OUTCOME_ENG_DESC as OUTCOME
, D_REGULATION.REG_ENGLISH_DESC AS REGULATION
, F_OCCURRENCES.OCCURRENCE_DATE as theDATE
FROM F_OCCURRENCES
INNER JOIN D_OCCURRENCE_OUTCOME
ON F_OCCURRENCES.OUTCOME_ID = D_OCCURRENCE_OUTCOME.OUTCOME_ID
INNER JOIN D_REGULATION
ON D_REGULATION.REG_SURR_ID = F_OCCURRENCES.REG_SURR_ID
WHERE cap_facility_id = '11518'
and F_OCCURRENCES.REG_SURR_ID = '101'
UNION SELECT
F_INSPECTIONS.CAP_FACILITY_ID
, F_INSPECTIONS.REG_SURR_ID as reg_surr_id
, 'NO DATA' as INSPECTOR
, CASE Non_COMPLIANCE_IND
WHEN 1 THEN 'Non-Compliant'
WHEN 0 THEN 'Compliant'
END as COMPLIANCE_STATUS
, 'INSPECTION' as ACTIVITY
, DISP_ENGLISH_DESC as OUTCOME
, D_REGULATION.REG_ENGLISH_DESC AS REGULATION
, F_INSPECTIONS.START_DATE as theDATE
FROM F_INSPECTIONS
LEFT JOIN F_VIOLATIONS
ON F_INSPECTIONS.INSPECTION_ID = F_VIOLATIONS.INSPECTION_ID
LEFT JOIN D_DISPOSITION
on D_DISPOSITION.VIOLATION_DISP_ID = F_VIOLATIONS.VIOLATION_DISP_ID
LEFT JOIN D_REGULATION
ON F_INSPECTIONS.REG_SURR_ID = D_REGULATION.REG_SURR_ID
where F_INSPECTIONS.REG_SURR_ID = '101'
and F_INSPECTIONS.CAP_FACILITY_ID = '11518'
) AS z
ORDER BY theDate desc
-PatP

|||Brilliant!
the "as z" is what made this work.

thanks very much

|||That need for an alias bytes a lot of people... Standard SQL-92 isn't always as forgiving as Oracle (or any other vendor) can be, but that's the price you pay for portability.

-PatP

Get @@rowcount data from MSSQL using SqlDataSource with delete command

Hi

I'm using a simple SqlDataSource to connect to a stored proc to delete a number of rows from different tables.

In my SQL I'm using:

DECLARE @table1Count int
DELETE FROM Table1 WHERE id = @new_id
SET @table1Count = @@ROWCOUNT
SELECT @table1Count

I'm then using an input box and linking it to the delete control parameter. Then on a button click event I'm running SqlDataSource1.Delete(), which all works fine. But how do I get the @table1Count back into my aspx page?

Thanks

use OUT parameter

CREATE PROC dbo.SP_DeleteID (@new_id int, @effRowCnt int OUT)
AS
BEGIN
DELETE FROM Table1 WHERE id = @new_id
SET @effRowCnt = @@ROWCOUNT
END
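A hedged sketch of calling that procedure from T-SQL and reading the output parameter back (the id value 42 is illustrative); note that OUT/OUTPUT must be specified at the call site as well as in the procedure definition:

```sql
DECLARE @cnt int;

-- pass the variable with OUT so the proc's value flows back into it
EXEC dbo.SP_DeleteID @new_id = 42, @effRowCnt = @cnt OUT;

SELECT @cnt AS RowsDeleted;
```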

|||

OK, but how do I reference the @effRowCnt in the code-behind page?

thanks

|||

If you are just trying to get a count of the number of rows deleted, an easier way to do it is to use the Deleted event of the DataSource, something like:

Protected Sub SqlDataSource1_Deleted(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.SqlDataSourceStatusEventArgs) Handles SqlDataSource1.Deleted

    NumberofRowsDeleted = e.AffectedRows

End Sub

HTH

get # of rows affected

I use a business logic layer, a TableAdapter (middle tier) and a stored procedure (backend). In my stored procedure,

I set as follows:

set nocount off

What and where should I do in order to get the rows affected count in the business logic layer(C#/VB code)?

Thank you.

Hi

Here is a snippet from Building and using a 3-tiered data architecture with ASP.NET 2.0 (Creating a Business Logic Layer):

[System.ComponentModel.DataObjectMethodAttribute(System.ComponentModel.DataObjectMethodType.Update, true)]
public bool UpdateProduct(string productName, int? supplierID, int? categoryID,
    string quantityPerUnit, decimal? unitPrice, short? unitsInStock,
    short? unitsOnOrder, short? reorderLevel, bool discontinued, int productID)
{
    Northwind.ProductsDataTable products = Adapter.GetProductByProductID(productID);

    if (products.Count == 0)
        // no matching record found, return false
        return false;

    Northwind.ProductsRow product = products[0];

    // Business rule check - cannot discontinue a product that's supplied by only
    // one supplier
    if (discontinued)
    {
        // Get the products we buy from this supplier
        Northwind.ProductsDataTable productsBySupplier =
            Adapter.GetProductsBySupplierID(product.SupplierID);

        if (productsBySupplier.Count == 1)
            // this is the only product we buy from this supplier
            throw new ApplicationException(
                "You cannot mark a product as discontinued if it's the only product purchased from a supplier");
    }

    product.ProductName = productName;
    if (supplierID == null) product.SetSupplierIDNull();
    else product.SupplierID = supplierID.Value;
    if (categoryID == null) product.SetCategoryIDNull();
    else product.CategoryID = categoryID.Value;
    if (quantityPerUnit == null) product.SetQuantityPerUnitNull();
    else product.QuantityPerUnit = quantityPerUnit;
    if (unitPrice == null) product.SetUnitPriceNull();
    else product.UnitPrice = unitPrice.Value;
    if (unitsInStock == null) product.SetUnitsInStockNull();
    else product.UnitsInStock = unitsInStock.Value;
    if (unitsOnOrder == null) product.SetUnitsOnOrderNull();
    else product.UnitsOnOrder = unitsOnOrder.Value;
    if (reorderLevel == null) product.SetReorderLevelNull();
    else product.ReorderLevel = reorderLevel.Value;
    product.Discontinued = discontinued;

    // Update the product record
    int rowsAffected = Adapter.Update(product);

    // Return true if precisely one row was updated, otherwise false
    return rowsAffected == 1;
}
|||

@@ROWCOUNT returns the number of records affected in the database, and hence in the stored procedure, so you need to return it as an output parameter from the sp to the upper layers.

thanks,

satish.
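Satish's suggestion, sketched in T-SQL (the procedure, table and column names here are hypothetical): expose @@ROWCOUNT through an output parameter, and capture it immediately after the statement, since almost any later statement resets it:

```sql
CREATE PROC dbo.UpdateCustomerName
    @CustomerID int,
    @NewName varchar(100),
    @RowsAffected int OUTPUT
AS
BEGIN
    UPDATE dbo.Customers
    SET Name = @NewName
    WHERE CustomerID = @CustomerID;

    -- capture immediately: a later statement would reset @@ROWCOUNT
    SET @RowsAffected = @@ROWCOUNT;
END
```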


Monday, March 26, 2012

Generation of report takes a long time (performance issue)

Hi,

My company is using MS SQL Server 2000 Developer Edition and Reporting Services.
We have some tables which now have around 4,000,000 rows. When we generate the report
SELECT sessionNum, moteID, dbo.MacAddrConv(macAddr) AS macAddr, chID, dbo.TimestampConv(timestamp) AS timestamp, value
FROM NotificationData
ORDER BY timestamp
it takes 15 minutes. Our table is located on a server which has these parameters:

1- Currently there are about 4.0 million entries in the database, which use 1638.44 MB of memory

2- The new SQL server has a 3.39 GHz Intel Xeon processor, 3.00 GB of RAM and 192 GB of hard drive.

Why does it take so long to generate the reports? Could you help us to improve the performance of our database?

Thanks a lot,

Alice

The query selects 4 million rows. If you just show all the rows in the report at about 40 rows per page, would you really want to generate a report with 100,000 pages?

If however, you just show aggregated values in the report (but not the detail rows), you should rather perform the aggregations inside the query - because the aggregations will be performed much more efficiently inside the database.

-- Robert
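Robert's point about aggregating inside the query could be sketched like this (the grouping columns chosen here are illustrative; the right choice depends on what the report actually summarizes):

```sql
-- Let the database aggregate instead of returning 4 million detail rows
SELECT moteID, chID,
       COUNT_BIG(*)   AS readingCount,
       MIN(timestamp) AS firstReading,
       MAX(timestamp) AS lastReading
FROM NotificationData
GROUP BY moteID, chID
ORDER BY moteID, chID;
```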

|||I'm not impressed with Reporting Services' scalability.

I have reports that are much smaller than yours that take forever to run.

Figure out a way to make the report smaller; it isn't going to handle it nicely.

|||First question: must your report show so many records at one time? I believe that even if you can display them all, you will still have printing problems.
Second question: how do you lay out your report?

My suggestion would be to add more parameters or to fix the number of output records per page, and to do any math calculations in the query instead of the report. Hope this helps.|||

Generating ridiculously huge reports is a problem with every reporting system I've seen. However, I think there are a couple of options here.

First, use the new query execution plan tools to make sure the query isn't too complicated.

|||Changing the report source to a SQL Server stored procedure can also give dramatic performance boosts. This is mainly due to procs being pre-compiled.


Generation of estimated execution plan takes a long time

Hello,
I am puzzled by the following problem. One of our tables has more than 12
billion rows. Usually, the query plan and the query itself execute almost
immediately, but occasionally it takes about 2 minutes to generate the estimated
query execution plan for the following query:
SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
There is a PK clustered index defined on the qwEventIx field. I do not observe
any blocking on this object when it takes a long time to generate the estimated
query plan. There is a plan cached in syscacheobjects for this query.
I was wondering if anyone can explain why it takes such a long time to
generate the estimated query plan on some occasions.
Thanks,
Igor

Igor,
Are the statistics current and up-to-date?
HTH
Jerry
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
> Hello,
> I am puzzled by following problem. One of our tables has more than 12
> billion rows. Usually, query plan and query itself is executed almost
> immediately but occasionally, it take about 2 minutes to generate
> estimated query execution plan for following query:
> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
> There is a PK clustered index defined on qwEventIx field. I do not
> observer any blocking on this object when it takes a long time to
> generate estimated query plan. There is a plan cached in syscacheobjects
> for this query.
> I was wondering if anyone can explain why it takes such a long time to
> generate estimated query plan on some occasions.
> Thanks,
> Igor
>

|||It could be that this is when auto-update of statistics occurs. You can catch that in Profiler using
Object, Auto Stats.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
> Hello,
> I am puzzled by following problem. One of our tables has more than 12 billion rows. Usually, query
> plan and query itself is executed almost immediately but occasionally, it take about 2 minutes to
> generate estimated query execution plan for following query:
> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
> There is a PK clustered index defined on qwEventIx field. I do not observer any blocking on this
> object when it takes a long time to generate estimated query plan. There is a plan cached in
> syscacheobjects for this query.
> I was wondering if anyone can explain why it takes such a long time to generate estimated query
> plan on some occasions.
> Thanks,
> Igor
>

|||Tibor,
You were absolutely correct! SQL Server was updating statistics on 12
billion rows table. It was taking over 2 minutes. I suppose it might make
sense to disable AUTOSTATS on this table with sp_autostats and update
statistics manually during off-peak hours. Would you recommend this
approach?
Thanks,
Igor
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%2311fWFg2FHA.956@.TK2MSFTNGP10.phx.gbl...
> It could be that this is when auto-update of statistics occurs. You can
> catch that in Profiler using Object, Auto Stats.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
>

|||Jerry,
It is indeed statistics related.
Thanks,
Igor
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:uQmyqVa2FHA.2364@.TK2MSFTNGP12.phx.gbl...
> Igor,
> Are the statistics current and up-to-date?
> HTH
> Jerry
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
>

|||Yes, with that long auto-stats time, disabling autostats for that table is reasonable. Make sure you
schedule manual update statistics with a reasonable sample and at a good time (like not before some
big batch, but after :-) ).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:e1KGDu%232FHA.3420@.TK2MSFTNGP15.phx.gbl...
> Tibor,
> You were absolutely correct! SQL Server was updating statistics on 12 billion rows table. It
> was taking over 2 minutes. I suppose it might make sense to disable AUTOSTATS on this table with
> sp_autostats and update statistics manually during off-peak hours. Would you recommend this
> approach?
> Thanks,
> Igor
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
> news:%2311fWFg2FHA.956@.TK2MSFTNGP10.phx.gbl...
>
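Tibor's recommendation, sketched in T-SQL (the sample percentage and the scheduling are assumptions to adapt to your workload):

```sql
-- Disable automatic statistics updates for the large table only
EXEC sp_autostats 'dbo.FTE_STREAM_EVENT', 'OFF';

-- Then refresh statistics manually during off-peak hours,
-- e.g. from a scheduled job; the sample size here is illustrative
UPDATE STATISTICS dbo.FTE_STREAM_EVENT WITH SAMPLE 5 PERCENT;
```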

Generation of estimated execution plan takes a long time

Hello,
I am puzzled by following problem. One of our tables has more than 12
billion rows. Usually, query plan and query itself is executed almost
immediately but occasionally, it take about 2 minutes to generate estimated
query execution plan for following query:
SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
There is a PK clustered index defined on qwEventIx field. I do not observer
any blocking on this object when it takes a long time to generate estimated
query plan. There is a plan cached in syscacheobjects for this query.
I was wondering if anyone can explain why it takes such a long time to
generate estimated query plan on some occasions.
Thanks,
Igor
Igor,
Are the statistics current and up-to-date?
HTH
Jerry
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
> Hello,
> I am puzzled by following problem. One of our tables has more than 12
> billion rows. Usually, query plan and query itself is executed almost
> immediately but occasionally, it take about 2 minutes to generate
> estimated query execution plan for following query:
> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
> There is a PK clustered index defined on qwEventIx field. I do not
> observer any blocking on this object when it takes a long time to
> generate estimated query plan. There is a plan cached in syscacheobjects
> for this query.
> I was wondering if anyone can explain why it takes such a long time to
> generate estimated query plan on some occasions.
> Thanks,
> Igor
>
|||It could be that this is when auto-update of statistics occurs. You can catch that in Profiler using
Object, Auto Stats.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
> Hello,
> I am puzzled by following problem. One of our tables has more than 12 billion rows. Usually, query
> plan and query itself is executed almost immediately but occasionally, it take about 2 minutes to
> generate estimated query execution plan for following query:
> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
> There is a PK clustered index defined on qwEventIx field. I do not observer any blocking on this
> object when it takes a long time to generate estimated query plan. There is a plan cached in
> syscacheobjects for this query.
> I was wondering if anyone can explain why it takes such a long time to generate estimated query
> plan on some occasions.
> Thanks,
> Igor
>
|||Tibor,
You were absolutely correct! SQL Server was updating statistics on 12
billion rows table. It was taking over 2 minutes. I suppose it might make
sense to disable AUTOSTATS on this table with sp_autostats and update
statistics manually during off-peak hours. Would you recommend this
approach?
Thanks,
Igor
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%2311fWFg2FHA.956@.TK2MSFTNGP10.phx.gbl...
> It could be that this is when auto-update of statistics occurs. You can
> catch that in Profiler using Object, Auto Stats.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
>
|||Jerry,
It is indeed statistics related.
Thanks,
Igor
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:uQmyqVa2FHA.2364@.TK2MSFTNGP12.phx.gbl...
> Igor,
> Are the statistics current and up-to-date?
> HTH
> Jerry
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
>
|||Yes, with that long auto.stats time, disabling autostats for that table is reasonable. Make sure you
schedule manual update statistics with a reasonable sample and at a good time (like not before some
big batch, but after :-) ).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:e1KGDu%232FHA.3420@.TK2MSFTNGP15.phx.gbl...
> Tibor,
> You were absolutely correct! SQL Server was updating statistics on 12 billion rows table. It
> was taking over 2 minutes. I suppose it might make sense to disable AUTOSTATS on this table with
> sp_autostats and update statistics manually during off-peak hours. Would you recommend this
> approach?
> Thanks,
> Igor
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
> news:%2311fWFg2FHA.956@.TK2MSFTNGP10.phx.gbl...
>

Generation of estimated execution plan takes a long time

Hello,
I am puzzled by following problem. One of our tables has more than 12
billion rows. Usually, query plan and query itself is executed almost
immediately but occasionally, it take about 2 minutes to generate estimated
query execution plan for following query:
SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
There is a PK clustered index defined on qwEventIx field. I do not observer
any blocking on this object when it takes a long time to generate estimated
query plan. There is a plan cached in syscacheobjects for this query.
I was wondering if anyone can explain why it takes such a long time to
generate estimated query plan on some occasions.
Thanks,
IgorIgor,
Are the statistics current and up-to-date?
HTH
Jerry
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
> Hello,
> I am puzzled by following problem. One of our tables has more than 12
> billion rows. Usually, query plan and query itself is executed almost
> immediately but occasionally, it take about 2 minutes to generate
> estimated query execution plan for following query:
> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
> There is a PK clustered index defined on qwEventIx field. I do not
> observer any blocking on this object when it takes a long time to
> generate estimated query plan. There is a plan cached in syscacheobjects
> for this query.
> I was wondering if anyone can explain why it takes such a long time to
> generate estimated query plan on some occasions.
> Thanks,
> Igor
>|||It could be that this is when auto-update of statistics occurs. You can catch that in Profiler using
Object, Auto Stats.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
> Hello,
> I am puzzled by following problem. One of our tables has more than 12 billion rows. Usually, query
> plan and query itself is executed almost immediately but occasionally, it take about 2 minutes to
> generate estimated query execution plan for following query:
> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
> There is a PK clustered index defined on qwEventIx field. I do not observer any blocking on this
> object when it takes a long time to generate estimated query plan. There is a plan cached in
> syscacheobjects for this query.
> I was wondering if anyone can explain why it takes such a long time to generate estimated query
> plan on some occasions.
> Thanks,
> Igor
>|||Tibor,
You were absolutely correct! SQL Server was updating statistics on 12
billion rows table. It was taking over 2 minutes. I suppose it might make
sense to disable AUTOSTATS on this table with sp_autostats and update
statistics manually during off-peak hours. Would you recommend this
approach?
Thanks,
Igor
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%2311fWFg2FHA.956@.TK2MSFTNGP10.phx.gbl...
> It could be that this is when auto-update of statistics occurs. You can
> catch that in Profiler using Object, Auto Stats.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
>> Hello,
>> I am puzzled by following problem. One of our tables has more than 12
>> billion rows. Usually, query plan and query itself is executed almost
>> immediately but occasionally, it take about 2 minutes to generate
>> estimated query execution plan for following query:
>> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
>> There is a PK clustered index defined on qwEventIx field. I do not
>> observer any blocking on this object when it takes a long time to
>> generate estimated query plan. There is a plan cached in syscacheobjects
>> for this query.
>> I was wondering if anyone can explain why it takes such a long time to
>> generate estimated query plan on some occasions.
>> Thanks,
>> Igor
>|||Jerry,
It is indeed statistics related.
Thanks,
Igor
"Jerry Spivey" <jspivey@.vestas-awt.com> wrote in message
news:uQmyqVa2FHA.2364@.TK2MSFTNGP12.phx.gbl...
> Igor,
> Are the statistics current and up-to-date?
> HTH
> Jerry
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
>> Hello,
>> I am puzzled by following problem. One of our tables has more than 12
>> billion rows. Usually, query plan and query itself is executed almost
>> immediately but occasionally, it take about 2 minutes to generate
>> estimated query execution plan for following query:
>> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
>> There is a PK clustered index defined on qwEventIx field. I do not
>> observer any blocking on this object when it takes a long time to
>> generate estimated query plan. There is a plan cached in syscacheobjects
>> for this query.
>> I was wondering if anyone can explain why it takes such a long time to
>> generate estimated query plan on some occasions.
>> Thanks,
>> Igor
Yes, with that long auto-stats time, disabling autostats for that table is reasonable. Make sure you
schedule a manual update statistics with a reasonable sample and at a good time (like not before some
big batch, but after :-) ).
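A sketch of that approach, using the table name from the thread (the sample percentage and schema prefix are illustrative assumptions, not recommendations from the posters):

```sql
-- One-time: stop auto-update of statistics for just this table
EXEC sp_autostats 'dbo.FTE_STREAM_EVENT', 'OFF'

-- Scheduled off-peak job (e.g. after, not before, the big nightly batch):
-- refresh statistics with an explicit sample; a FULLSCAN would be
-- prohibitively slow on a 12-billion-row table
UPDATE STATISTICS dbo.FTE_STREAM_EVENT WITH SAMPLE 1 PERCENT
```

Remember that with autostats off, stale statistics are now your responsibility; if the job ever stops running, plans against this table will degrade silently.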
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:e1KGDu%232FHA.3420@.TK2MSFTNGP15.phx.gbl...
> Tibor,
> You were absolutely correct! SQL Server was updating statistics on 12 billion rows table. It
> was taking over 2 minutes. I suppose it might make sense to disable AUTOSTATS on this table with
> sp_autostats and update statistics manually during off-peak hours. Would you recommend this
> approach?
> Thanks,
> Igor
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
> news:%2311fWFg2FHA.956@.TK2MSFTNGP10.phx.gbl...
>> It could be that this is when auto-update of statistics occurs. You can catch that in Profiler
>> using Object, Auto Stats.
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>> Blog: http://solidqualitylearning.com/blogs/tibor/
>>
>> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
>> news:uIwoFzZ2FHA.4076@.TK2MSFTNGP09.phx.gbl...
>> Hello,
>> I am puzzled by following problem. One of our tables has more than 12 billion rows. Usually,
>> query plan and query itself is executed almost immediately but occasionally, it take about 2
>> minutes to generate estimated query execution plan for following query:
>> SELECT MAX(qwEventIx) FROM FTE_STREAM_EVENT (NOLOCK)
>> There is a PK clustered index defined on qwEventIx field. I do not observer any blocking on this
>> object when it takes a long time to generate estimated query plan. There is a plan cached in
>> syscacheobjects for this query.
>> I was wondering if anyone can explain why it takes such a long time to generate estimated query
>> plan on some occasions.
>> Thanks,
>> Igor
>>
>

Friday, March 23, 2012

generating seq no with in a main column

Hi all,
please see if some one can suggest a better approach for this.
i need to move 5 million rows from one table to another.
LoanID is the only column in the Source table
Source table structure
=====================================
loanID
1
1
2
2
2
3
4
=====================================
target or destination table should look like this
=========================================
loanID Sqeuence_number_with_in_LoanID
1 1
1 2
2 1
2 2
2 3
3 1
4 1
=========================================

this is what i have tried so far
1) there are about 4 & 1/2 million out of the 5 million which will appear
only once (this is a one-time load) in the source table.
so if i do this
----
select loanID
from SourceTable
group by loanID
having count(*)=1

and then i will hard code Sqeuence_number_with_in_LoanID to 1, because
i know they will only come once. this is a set-based operation so it is
fast.

2) now, the problem is with the remainder of the 1/2 million rows.
i had to use a cursor for this which is very, very slow.
the other option i was thinking of is to pull it into the front end
(.net) and do it there. i am hoping that will be faster.
the database i am working on is sql server 2000.
it would help if someone could suggest a better approach for handling
this remaining 1/2 million rows.
thanks in advance
-sidd
Your target table doesn't look very meaningful as you've described it. If
you're not recording any extra information on each row then what's the point
of populating the table with lots of redundant data? Much more efficient
just to add a count column:

CREATE TABLE TargetTable (loanid INTEGER PRIMARY KEY, loancount INTEGER NOT
NULL CHECK (loancount>0))

INSERT INTO TargetTable (loanid, loancount)
SELECT loanid, COUNT(*)
FROM SourceTable
GROUP BY loanid

--
David Portas
----
Please reply only to the newsgroup
--|||Below is one method.

--create table of sequence numbers
DECLARE @.MaxCount int
SELECT @.MaxCount = MAX(loanCount)
FROM(
SELECT COUNT(*) AS loanCount
FROM SourceTable
GROUP BY loanID
) AS loanCounts
CREATE TABLE #SequenceNumbers
(
Sequence_number_with_in_LoanID int NOT NULL
PRIMARY KEY
)
WHILE @.MaxCount > 0
BEGIN
INSERT INTO #SequenceNumbers VALUES(@.MaxCount)
SET @.MaxCount = @.MaxCount - 1
END

-- load target table
SELECT loanID, Sequence_number_with_in_LoanID
INTO TargetTable
FROM (
SELECT loanID, COUNT(*) AS loanCount
FROM SourceTable
GROUP BY loanID) AS SourceTableCounts
JOIN #SequenceNumbers ON
#SequenceNumbers.Sequence_number_with_in_LoanID <=
SourceTableCounts.loanCount
DROP TABLE #SequenceNumbers
GO
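The WHILE loop above can also be replaced with a set-based fill of the numbers table. A sketch using a small digits table (covers sequence values 1..1000; add or remove a cross join if a loanID can repeat more or less often than that):

```sql
-- Digits 0-9
CREATE TABLE #Digits (d int NOT NULL PRIMARY KEY)
INSERT INTO #Digits
SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL
SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL
SELECT 8 UNION ALL SELECT 9

-- Cross join three times: 10 * 10 * 10 = 1000 sequence numbers, 1-based
INSERT INTO #SequenceNumbers (Sequence_number_with_in_LoanID)
SELECT ones.d + 10 * tens.d + 100 * hundreds.d + 1
FROM #Digits AS ones
CROSS JOIN #Digits AS tens
CROSS JOIN #Digits AS hundreds

DROP TABLE #Digits
```

On 5 million rows the difference between this and the loop is negligible, but the pattern scales better if the maximum duplicate count grows.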

--
Hope this helps.

Dan Guzman
SQL Server MVP

"sidd" <siddharthkhare@.hotmail.com> wrote in message
news:af96a2d7.0312270944.2dd0ca8f@.posting.google.c om...
> Hi all,
> please see if some one can suggest a better approach for this.
> i need to move 5 million rows from one table to another.
> LoanID is the only clumn in Source table
> Source table structure
> =========-->
> ==============================
> loanID
> 1
> 1
> 2
> 2
> 2
> 3
> 4
> =====================================
> target or detination table should look like this
> =========================================
> loanID Sqeuence_number_with_in_LoanID
> 1 1
> 1 2
> 2 1
> 2 2
> 2 3
> 3 1
> 4 1
> =========================================
> this is what i have tried so far
> 1)there are about 4 & 1/2 million out of 5 million which will appear
> only once (this is one tine load) in source table.
> so if i do this
> ----
> sleect loanID
> from SourceTable
> group by loanID
> having count(*)=1
> and then i will hard code Sqeuence_number_with_in_LoanID to 1 .because
> i know they will only come once.this is set based operation so this is
> fast.
> 2)now,problem is with remainder of the 1/2 million rows.
> i had to use cursor for this which is very-very slow.
> other option i was thinking of is to pull it in the front end
> (.net)and do it there.i am hoping that will be faster.
> Database i am working on is sql server 2000.
> if some could suggest me some better approach of handling this
> remainder
> 1/2 milion rows.
> thanks in advance
> -sidd|||>> LoanID is the only column in Source table <<

Then this is not a table by definition. Please post DDL in the
future, so we do not have to make guesses. Build a table of
sequential numbers; this is a standard SQL programming trick.

CREATE TABLE Foobar (loan_id INTEGER NOT NULL, seq_nbr INTEGER NOT NULL);

INSERT INTO Foobar (loan_id, seq_nbr)
SELECT DISTINCT S1.loan_id, seq
FROM Source AS S1, Sequence
WHERE seq <= (SELECT COUNT(*)
              FROM Source AS S2
              WHERE S1.loan_id = S2.loan_id);
But this is not a good design.|||Dan,
it works great!!
thanks
sidd
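As an aside for readers on later versions: SQL Server 2005 introduced ROW_NUMBER(), which turns the whole load into one set-based statement. Not an option on the poster's SQL Server 2000, but worth knowing:

```sql
-- SQL Server 2005+ only: number rows within each loanID directly.
-- With no other column to order by, ordering by loanID itself gives an
-- arbitrary but valid 1..N sequence per loanID.
INSERT INTO TargetTable (loanID, Sqeuence_number_with_in_LoanID)
SELECT loanID,
       ROW_NUMBER() OVER (PARTITION BY loanID ORDER BY loanID)
FROM SourceTable
```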

Wednesday, March 21, 2012

Generating reports on very large and "live" databases

Hello All,
I am working on an application which has a table with 100 million
records.
A background process keeps inserting 5 rows per second into this
table.
My client is generating reports from that table. They are interested
in seeing reports with real-time data. This means that as soon as a
record is inserted into the table, the reports should process that
record as well.
At most, around 20K records are processed to generate a report. When
this happens it takes a full 30 min to generate a report. Upon
analysis I find that since the table is so huge, the indexes on that
table are also very large.
What can I do to fix this? I have thought about denormalizing the
table. But some programmers say that picking up data from one
table is better because doing joins on multiple tables will be even
slower.
Another approach is to bring in a data warehouse, but I don't know much
about this except what I learnt in an MSDN session about creating data
cubes. But I suppose a cube can be created only when the data is static,
and in my case new records are inserted every second and they are to be
included in the report.
The 3rd approach is that I create report-specific tables and create a
trigger (or a C program which polls for changes in the main table), and
every time new records are inserted into the main table, I preprocess
them. Then when the users make a request for the report I generate my
report from the preprocessed table.
But I feel the trigger will be fired too many times, and if the number
of reports is significant (> 35) then the trigger/C program could become
a bottleneck itself.
What should I do? It is such a tricky problem.
Please help me and give me some advice. Thank you for your help.
regards,
Abhishek
table size (or row count) is irrelevant unless a
table/index scan is involved.
5 rows/sec insert is a negligible load on the system.
however, a report that involves 20K rows with good indexes
should not take 30 min to run.
1) what is the query?
2) what are the indexes on this table?
3) what does the execution plan show?
(indexes used, type of operation, rows involved, costs)
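For point 3, the plan can be captured like this on SQL Server 2000. The query, table, and column names below are hypothetical stand-ins, since the poster never showed the schema:

```sql
-- Display the estimated plan for the report query instead of running it
SET SHOWPLAN_TEXT ON
GO
-- Hypothetical report query against a hypothetical Events table
SELECT event_type, COUNT(*)
FROM Events
WHERE event_date >= '20031201'
GROUP BY event_type
GO
SET SHOWPLAN_TEXT OFF
GO

-- If the plan shows a Table Scan or Clustered Index Scan, a covering index
-- on the filter + grouping columns lets the 20K-row report touch only the
-- index instead of the 100-million-row table:
CREATE INDEX IX_Events_date_type ON Events (event_date, event_type)
```

A seek on such an index makes report cost proportional to the 20K qualifying rows rather than the full table, which is exactly the distinction the reply is driving at.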
>--Original Message--
>Hello All,
>I am working on a application which has a table with 100
million
>records.
>A background process keeps inserting 5 rows per second
into this
>table.
>My client is generating reports from that table. They are
interested
>in seeing reports with real time data. This means that as
soon as a
>record is inserted into the table, the reports should
process that
>record as well.
>At the max around 20K records are processed to generate a
report. When
>this happens it takes simply 30 min to generate a report.
Upon
>analysis I find that since the table is so huge, the
indexes on those
>table is also very huge.
>What can I do to fix this, I have thought about
denormalizing the
>table. But the some programmers say that picking up data
from one
>table is better because doing joins on multiple tables
will be even
>slower.
>Another approach is to bring in a data warehouse but I
don't know much
>about this except what I learnt in MSDN session about
creating of data
>cubes. But I suppose cube can be created only when data
is static. but
>in my case new records are inserted every second and they
are to be
>included in the report.
>The 3rd approach is that I create report specific tables
and create a
>trigger (or a C programm which polls for changes in main
table) and
>every time new records are inserted into the main table,
I preprocess
>them. Then when the users make a request for the report
I generate my
>report from the preprossed table.
>But I feel the trigger will be fired to many times and if
the number
>of reports are significant (> 35) then trigger/C program
could become
>a bottleneck itself.
>What should I do? it is such a tricky problem.
>Please help me and give me some advice. Thank you for
your help.
>regards,
>Abhishek
>.
>

Monday, March 19, 2012

generating insert scripts

Hi,

I have a table with 25 columns and 100 rows of data. I want to generate INSERT scripts with the data. SQL 2005 does not provide an out-of-the-box option to do so.

Has anyone tried this/ used any other tool for the same.

Please share the solution.

Thanks.

hi,
you can use a third party tool.. there's a lot of them.. or you just write a query something like

select 'insert into tblTable (col1, col2, col3.......)'
select 'select '''+ col1 + ''',''' + col2 + ''''.......... + ' union all' from tblTable

- clintz|||

you can use this script:

declare @Columns varchar(5000), @TableName varchar(50), @Condition varchar(500), @ColumnVal varchar(8000)
set @TableName = 'table1'
set @Condition = '1=1'
set @Columns = ''
set @ColumnVal = ''
select @Columns = @Columns + c.Column_name + ',',
       @ColumnVal = @ColumnVal +
    case c.data_type
        when 'varchar' then '''''''+'
        when 'datetime' then '''''''+ cast('
        else '+ cast(' end +
    c.Column_name +
    case c.data_type
        when 'varchar' then '+ '''''
        when 'datetime' then ' as varchar(50)) + '''''
        else ' as varchar(50)) +' end + ''','
from INFORMATION_SCHEMA.columns c where c.table_name = @TableName
set @Columns = substring(@Columns, 1, len(@Columns) - 1)
set @ColumnVal = substring(@ColumnVal, 1, len(@ColumnVal) - 1)

exec('select ''insert into ' + @TableName + ' (' + @Columns + ') values (''' + @ColumnVal + ')'' from ' + @TableName)

please note that not all datatypes are handled; you will have to modify the CASE part for other datatypes

|||

You can use one of these excellent options:

DDL -Script Database to File
http://www.wardyit.com/blog/blog/archive/2006/07/21/133.aspx
http://www.sqlteam.com/publish/scriptio
http://www.aspfaq.com/etiquette.asp?id=5006
http://www.codeproject.com/dotnet/ScriptDatabase.asp
http://www.nigelrivett.net/DMO/DMOScriptAllDatabases.html
http://rac4sql.net/qalite_main.asp

DDL –Script Data to file (Database Publishing Wizard)
http://www.microsoft.com/downloads/details.aspx?FamilyID=56e5b1c5-bf17-42e0-a410-371a838e570a&DisplayLang=en

|||Thanks Arnie, clintz, mobin

Generating and directing new rows in PostExecute.

I want to construct a dataset based on all rows passed in from a source and, upon running through all of them, push each dataset record to an output.

Unfortunately, upon calling the AddRow method, I get an "object reference not set to an instance of an object" error. Am I not allowed to create new rows in PostExecute?

Bill,

Can you post your code, indicate which line the error occurs on, and tell us how you have configured the outputs of the component.

-Jamie

|||It errors on reference to a row buffer.

With dsMaterialBuffer
.AddRow()
'Code for row values
End With

I've added the following to check if it's actually defined within PostExecute;
If dsMaterialBuffer Is Nothing Then MsgBox("dsMaterialBuffer is nothing.") 'My row buffer

dsMaterialBuffer, which is the row buffer I'm using, evaluates to "Nothing" and that is where my object reference errors are coming from.

Is it possible that the row buffers are being thrown out once PostExecute hits?

The outputs, by the way, are non-synchronous with standard columns.|||

What is the name of the output as defined in the inputs and outputs tab of the script component editor?

-Jamie

|||dsMaterial|||

Hmm, strange. if you want, drop me an email via here: http://blogs.conchango.com/jamiethomson/contact.aspx and I'll reply so you can send me the package. I can take a look and see if anything jumps out.

If you can build the package so that I am able to run it as well (i.e. not reliant on external data sources, just use a script source component instead) then that'd help.

-Jamie


Friday, March 9, 2012

generate three rows out of one row from a table

i've two tables as shown below. i want to generate table2 from table1.
more details -- for each row in table1 i want three rows in table2; these
three rows' c2 values are A, B, C
table1
--
c1
--
1
2
3
table2
--
c1 c2
-- --
1 A
1 B
1 C
2 A
2 B
2 C
3 A
3 B
3 C
thanks in advance
Use a cartesian product -- CROSS JOIN in SQL.
SELECT c1, c2
FROM table1
CROSS JOIN ( SELECT 'A' UNION
SELECT 'B' UNION
SELECT 'C' ) D ( c2 ) ;
Anith

Sunday, February 26, 2012

generate report very slow?

I have posted about slow performance on report
now i created a secondary filegroup (2nd harddisk) -- Account Ledge table --
and moved one table (684846 rows of data) from the primary to the secondary filegroup
but it seems not to have improved a lot
any other idea?
You should check your query and indexes first. Maybe you are ordering
according to some expression that is not supported by an index? Maybe your
Where and Join expressions are not supported by indexes? Just by the info
you gave us it is hard to tell where the problem lies. You should include
DDL statements (Create table etc.), sample data and query you are using.
--
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
More than just Training
www.SolidQualityLearning.com
"izumi" <test@.test.com> wrote in message
news:ejom%23G2KEHA.1612@.TK2MSFTNGP12.phx.gbl...
> I have posted about slow performance on report
> now i create the secondary filegroup(2nd harddisk)--Account Ledge table
> and put one table(684846 rows of data) from primary to secondary filegroup
> but seems not improved a lot
> any other idea '?
>

>