
Monday, March 12, 2012

Generating a cached copy of the report

Is there a way to programmatically determine if a report should be generated from the cache or run against real-time data?

I have a situation wherein I would need to programmatically figure out whether the data source needs to be a cached version or real-time data, based on certain options that a user defines (not report parameters)...

Any help would be much appreciated.|||Not before running the report, but afterwards you can look at the ExecutionDate property. Depending on the version of SRS you are using, that can be retrieved in different ways.|||
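Since the decision can't be made before the run, one workaround following from the reply above is to compare the last ExecutionDate against an acceptable age. A minimal sketch of that freshness check, in Python; the ExecutionDate value itself would have to come from the SSRS API (how varies by version), so the helper below only does the comparison:

```python
from datetime import datetime, timedelta, timezone

def should_use_cache(execution_date, max_age, now=None):
    """Return True if the last rendered copy is still fresh enough.

    execution_date: the ExecutionDate read back after the report last ran
    (caller-supplied; retrieving it from SSRS differs across versions).
    max_age: how stale a cached copy is allowed to be.
    """
    if now is None:
        now = datetime.now(timezone.utc)
    return now - execution_date <= max_age
```

If the check fails, the caller would re-run the report against live data instead of requesting the cached copy.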

Not sure what you're trying to do: data sources are not cached in Reporting Services. Reports are.

Essentially, if you want caching to work, you need to set the data source to use stored credentials and not use any User!UserID or User!Language references in your report.

The GetCacheOptions method provides caching information on the report itself.

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/rsprog/htm/rsp_ref_soapapi_service_ak_5jas.asp

-Lukasz
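The two prerequisites Lukasz lists (stored credentials, no User!UserID or User!Language references) can be sanity-checked mechanically before deploying. A hypothetical helper, not part of any SSRS API:

```python
def is_cacheable(uses_stored_credentials, expressions):
    """Rough pre-deployment check of the caching prerequisites above.

    uses_stored_credentials: whether the data source stores credentials.
    expressions: the report's expression strings, however you collect them
    (e.g. scraped from the RDL file).
    """
    # Caching requires stored credentials on the data source and no
    # User!UserID / User!Language references anywhere in the report.
    blocked = ("User!UserID", "User!Language")
    if not uses_stored_credentials:
        return False
    return not any(token in expr for expr in expressions for token in blocked)
```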

|||

Yes, I would like to have the report cached with the underlying data in it.

If I were to have it cached per UserID, would having references to User!UserID in the report take care of it?

Sorry if this is a pretty basic question, but I am trying to confirm whether this is possible using Reporting Services.
Thanks

|||The cache is not per user; it is shared between users (based on parameters). You can use a post-query filter on UserID, but referencing the user in the query doesn't make any sense. All of the caching options are discussed in the online help.|||If the report uses UserID in a post-query filter, the report will get cached, but the filter will be re-executed for each user.|||
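The shared-cache-plus-filter pattern described above reduces to: one cached dataset, with each user's allowed values applied after the query, re-evaluated per user at render time. A minimal sketch of that filtering step (row shape and names are illustrative, not from SSRS):

```python
def filter_for_user(cached_rows, allowed_regions):
    """Trim a shared cached dataset down to one user's allowed rows.

    cached_rows: the dataset cached once and shared between users.
    allowed_regions: the regions this particular user may see.
    """
    # The expensive query ran once; only this cheap filter is re-executed
    # for each user viewing the cached report.
    return [row for row in cached_rows if row["region"] in allowed_regions]
```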

I am trying to get around the user global variable to run reports. I have per-user security implemented in the reports, i.e. a given userID is only allowed to select certain regions in the region parameter.

I also have a hidden parameter with two values: online and offline. The parameter defaults to online when the reports run interactively, and to offline for batch subscriptions. When it is online, I get the currently logged-on userID using a CLR function. We get around the batch parameters by using a master userID that has access to all the data (all regions) in the subscriptions.
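The online/offline switch described here amounts to picking an effective user ID per run. A minimal sketch of that selection, assuming the poster's two mode values; the function and the master account name are hypothetical:

```python
def effective_user(mode, interactive_user, master_user="MASTER_USER"):
    """Pick which user ID a report run should execute under.

    mode: "online" (interactive run) or "offline" (batch subscription),
    matching the hidden parameter described above.
    """
    # Online: use the logged-on user (obtained via a CLR function in the
    # original setup). Offline: use the master account that can see all
    # regions, so the subscription isn't limited by one user's security.
    if mode == "online":
        return interactive_user
    return master_user
```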

When the subscription runs in batch, will interactive users pick up the cached report, or will it run again because the interactive parameter is online? From your reply, the cache entry that is selected is based on the report's parameter list.

The real question is: is the cache lookup based purely on the parameters passed, or is there a way around that? And is there a way to determine whether a cached version of the report is being served, so that I can plug in my master userID?

Thanks

Sunday, February 19, 2012

General Technique Advice - One Server Updating Another

I have a situation where a web application needs to obtain hourly updates as to inventory levels from a second system. There is also a once-a-day update where new catalog products that have been initially entered into the second system need to be automatically pulled and inserted into the web application, where they must be massaged and made ready for presentation on the web site. Both systems have MS SQL Server backends.
My question is ... what is the most common, simplest, most reliable, least-overhead ... (best) ... way to go about this?
Is this what Subscriptions are for? Stored Procedures?
Thanks for your advice,
Chip Dukes

Chip,
you could use DTS or replication or linked servers.
If DTS, you would have 2 packages, each scheduled.
If replication, then the hourly movement would be transactional, and the
once-a-day process would be transactional with a transformable subscription
(which amounts to using DTS behind the scenes anyway). Alternatively, the
second process could be plain transactional and (indexed) views used on the
subscriber to massage the output.
As a third possibility, you could use linked servers and scheduled jobs.
My preference would be transactional replication. This has a series of
alerts to maintain notification of the process success/failure, and the
distribution database can easily be queried at any time to see which
transactions are waiting in the 'pipeline', ready to be delivered, so
troubleshooting is not difficult. The only stipulation is that you have
primary keys on your table articles, but I doubt that would be an issue.
HTH,
Paul Ibison
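Whichever transport is chosen (DTS packages, transactional replication, or linked servers plus scheduled jobs), the two jobs reduce to the same logic: an hourly overwrite of inventory levels, and a daily diff that finds catalog rows the web side hasn't imported yet. A minimal sketch of that logic only, with made-up names; the actual data movement would happen in SQL Server:

```python
def sync_inventory(web_levels, source_levels):
    """Hourly job: the second system is authoritative for stock levels.

    Both arguments map product key -> quantity; source values overwrite
    web values, and web-only products keep their last known level.
    """
    merged = dict(web_levels)
    merged.update(source_levels)
    return merged

def new_catalog_products(web_skus, source_skus):
    """Daily job: products entered in the second system that the web
    application has not yet imported (and still needs to massage for
    presentation)."""
    return source_skus - web_skus
```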