Microsoft PRO

Microsoft 70-452 Free Dumps: 100% Real Microsoft 70-452 Exam Questions

Because the Microsoft 70-452 exam has changed recently, Flydumps presents a new version of the Microsoft 70-452 practice test, which helps candidates pass the Microsoft 70-452 exam easily. The exam dumps cover all aspects of the Microsoft 70-452 exam. You can visit our website to download the free Microsoft 70-452 questions and the new version of the VCE Player.

Question No : 1
You design a Business Intelligence (BI) solution by using SQL Server 2008.

You plan to create a SQL Server 2008 Reporting Services (SSRS) solution that contains
five sales dashboard reports.
Users must be able to manipulate the reports’ parameters to analyze data.
You need to ensure that the following requirements are met:
Which two tasks should you perform?
(Each correct answer presents part of the solution. Choose two.)

A. Filter data by using expressions.
B. Specify the default values for each parameter.
C. Create an available values list for each parameter.
D. Create report parameters by using query parameters to filter data at the data source.
Answer: A,B

Explanation:
Question No : 2
You design a SQL Server 2008 Reporting Services (SSRS) solution. You create a report by using Microsoft Visual Studio .NET 2008.
The report contains the following components:
You need to ensure that a summary of sales transactions is displayed for each customer
after the customer details.

Which component should you add to the report?

A. List
B. Table
C. Matrix
D. Subreport
Answer: D

Explanation:
http://msdn.microsoft.com/en-us/library/ms160348(SQL.100).aspx
How to: Add a Subreport and Parameters (Reporting Services)
Add subreports to a report when you want to create a main report that is a container for multiple related reports. A subreport is a reference to another report. To relate the reports through data values (for example, to have multiple reports show data for the same customer), you must design a parameterized report (for example, a report that shows the details for a specific customer) as the subreport. When you add a subreport to the main report, you can specify parameters to pass to the subreport. You can also add subreports to dynamic rows or columns in a table or matrix. When the main report is processed, the subreport is processed for each row. In this case, consider whether you can achieve the desired effect by using data regions or nested data regions.

Question No : 3
You design a Business Intelligence (BI) solution by using SQL Server 2008.
The solution includes a SQL Server 2008 Analysis Services (SSAS) database. The database contains a data mining structure that uses a SQL Server 2008 table as a data source. A table named OrderDetails contains detailed information on product sales. The OrderDetails table includes a column named Markup.
You build a data mining model by using the Microsoft Decision Trees algorithm. You classify Markup as discretized content.
The algorithm produces a large number of branches for Markup and results in low confidence ratings on predictable columns.
You need to verify whether the Markup values include inaccurate data.
What should you do?
A. Modify the content type of Markup as Continuous.
B. Create a data mining dimension in the SSAS database from OrderDetails.
C. Create a data profile by using SQL Server 2008 Integration Services (SSIS).
D. Create a cube in SSAS. Use OrderDetails as a measure group. Recreate the data mining structure and mining model from the cube data.
Answer: C

Explanation:
Discretized: The column has continuous values that are grouped into buckets. Each bucket is considered to have a specific order and to contain discrete values. Possible values for the discretization method are automatic, equal areas, or clusters. Automatic means that SSAS determines which method to use. Equal areas results in the input data being divided into partitions of equal size. This method works best with data with regularly distributed values. Clusters means that SSAS samples the data to produce a result that accounts for "clumps" of data values. Because of this sampling, Clusters can be used only with numeric input columns. You can use the date, double, long, or text data type with the Discretized content type.
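For reference, a minimal DMX sketch (the SQL dialect used by SSAS data mining) of a mining structure that buckets a continuous Markup column into equal-area ranges; the structure name, key column, and bucket count are hypothetical, not taken from the question:

-- DMX: hypothetical mining structure that discretizes the continuous Markup column.
CREATE MINING STRUCTURE [OrderDetailsStructure]
(
    [OrderDetailID] LONG KEY,                     -- hypothetical key column
    [Markup] DOUBLE DISCRETIZED(EQUAL_AREAS, 10), -- grouped into 10 equal-area buckets
    [ProductLine] TEXT DISCRETE
)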
Microsoft Decision Trees Algorithm
Microsoft Decision Trees is probably the most commonly used algorithm, in part because of its flexibility (decision trees work with both discrete and continuous attributes) and also because of the richness of its included viewers. It's quite easy to understand the output via these viewers. This algorithm is used to both view and to predict. It is also used (usually in conjunction with the Microsoft Clustering algorithm) to find deviant values. The Microsoft Decision Trees algorithm processes input data by splitting it into recursive (related) subsets. In the default viewer, the output is shown as a recursive tree structure. If you are using discrete data, the algorithm identifies the particular inputs that are most closely correlated with particular predictable values, producing a result that shows which columns are most strongly predictive of a selected attribute. If you are using continuous data, the algorithm uses standard linear regression to determine where the splits in the decision tree occur. Clicking a node displays detailed information in the Mining Legend window. You can configure the view using the various drop-down lists at the top of the viewer, such as Tree, Default Expansion, and so on. Finally, if you've enabled drillthrough on your model, you can display the drillthrough information: either columns from the model or (new to SQL Server 2008) columns from the mining structure, whether or not they are included in this model.
Data Profiling
The control flow Data Profiling task relates to business problems that are particularly prominent in BI projects: how to deal with huge quantities of data and what to do when this data originates from disparate sources. Understanding source data quality in BI projects (when scoping, early in prototyping, and during package development) is critical when estimating the work involved in building the ETL processes to populate the OLAP cubes and data mining structures. It's common to underestimate the amount of work involved in cleaning the source data before it is loaded into the SSAS destination structures. The Data Profiling task helps you to understand the scope of the source-data cleanup involved in your projects. Specifically, this cleanup involves deciding which methods to use to clean up your data. Methods can include the use of advanced package transformations (such as fuzzy logic) or more staging areas (relational tables) so that fewer in-memory transformations are necessary during the transformation processes. Other considerations include the total number of tasks in a single package, or overall package size.

Question No : 4
You design a Business Intelligence (BI) solution by using SQL Server 2008.
The solution contains a SQL Server 2008 Analysis Services (SSAS) database. A measure group in the database contains log entries of manufacturing events. These events include accidents, machine failures, production capacity metrics, and other activities.
You need to implement a data mining model that meets the following requirements:
Which algorithm should the data mining model use?
A. the Microsoft Time Series algorithm
B. the Microsoft Decision Trees algorithm
C. the Microsoft Linear Regression algorithm
D. the Microsoft Logistic Regression algorithm
Answer: A

Explanation:
Microsoft Time Series Algorithm
Microsoft Time Series is used to address a common business problem: accurate forecasting. This algorithm is often used to predict future values, such as rates of sale for a particular product. Most often the inputs are continuous values. To use this algorithm, your source data must contain one column marked as Key Time. Any predictable columns must be of type Continuous. You can select one or more inputs as predictable columns when using this algorithm.
Time series source data can also contain an optional Key Sequence column.
Function: The ARTxp algorithm has proved to be very good at short-term prediction. The ARIMA algorithm is much better at longer-term prediction. By default, the Microsoft Time Series algorithm blends the results of the two algorithms to produce the best prediction for both the short and long term.
Microsoft Decision Trees Algorithm
Microsoft Decision Trees is probably the most commonly used algorithm, in part because of its flexibility (decision trees work with both discrete and continuous attributes) and also because of the richness of its included viewers. It's quite easy to understand the output via these viewers. This algorithm is used to both view and to predict. It is also used (usually in conjunction with the Microsoft Clustering algorithm) to find deviant values. The Microsoft Decision Trees algorithm processes input data by splitting it into recursive (related) subsets. In the default viewer, the output is shown as a recursive tree structure. If you are using discrete data, the algorithm identifies the particular inputs that are most closely correlated with particular predictable values, producing a result that shows which columns are most strongly predictive of a selected attribute. If you are using continuous data, the algorithm uses standard linear regression to determine where the splits in the decision tree occur. Clicking a node displays detailed information in the Mining Legend window. You can configure the view using the various drop-down lists at the top of the viewer, such as Tree, Default Expansion, and so on. Finally, if you've enabled drillthrough on your model, you can display the drillthrough information: either columns from the model or (new to SQL Server 2008) columns from the mining structure, whether or not they are included in this model.
Microsoft Linear Regression Algorithm
Microsoft Linear Regression is a variation of the Microsoft Decision Trees algorithm, and works like classic linear regression: it fits the best possible straight line through a series of points (the sources being at least two columns of continuous data). This algorithm calculates all possible relationships between the attribute values and produces more complete results than other (non-data-mining) methods of applying linear regression. In addition to a key column, you can use only columns of the continuous numeric data type. Another way to understand this is that it disables splits. You use this algorithm to be able to visualize the relationship between two continuous attributes. For example, in a retail scenario, you might want to correlate physical placement locations in a retail store with rate of sale for items. The algorithm result is similar to that produced by any other linear regression method in that it produces a trend line. Unlike most other methods of calculating linear regression, the Microsoft Linear Regression algorithm in SSAS calculates all possible relationships between all input dataset values to produce its results. This differs from other methods of calculating linear regression, which generally use progressive splitting techniques between the source inputs.
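As a rough DMX sketch of the Time Series approach in answer A, a forecasting model over monthly event counts; the model name, column names, and data types are hypothetical:

-- DMX: hypothetical time series model; ARTxp and ARIMA results are blended by default.
CREATE MINING MODEL [EventForecast]
(
    [EventMonth] DATE KEY TIME,          -- time key that orders the series
    [EventCount] LONG CONTINUOUS PREDICT -- continuous value to forecast
)
USING Microsoft_Time_Series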

Question No : 5
You design a Business Intelligence (BI) solution by using SQL Server 2008.
The solution includes a SQL Server 2008 Analysis Services (SSAS) database. A cube in the database contains a large dimension named Customers. The database uses a data source that is located on a remote server.
Each day, an application adds millions of fact rows and thousands of new customers. Currently, a full process of the cube takes several hours.
You need to ensure that queries return the most recent customer data with the minimum amount of latency.
Which cube storage model should you use?
A. hybrid online analytical processing (HOLAP)
B. relational online analytical processing (ROLAP)
C. multidimensional online analytical processing (MOLAP)
D. automatic multidimensional online analytical processing (automatic MOLAP)
Answer: A

Explanation:
Relational OLAP
Relational OLAP (ROLAP) stores the cube structure in a multidimensional database. The leaf-level measures are left in the relational data mart that serves as the source of the cube. The preprocessed aggregates are also stored in a relational database table. When a decision maker requests the value of a measure for a certain set of dimension members, the ROLAP system first checks to determine whether the dimension members specify an aggregate or a leaf-level value. If an aggregate is specified, the value is selected from the relational table. If a leaf-level value is specified, the value is selected from the data mart. Also, because the ROLAP architecture retrieves leaf-level values directly from the data mart, the leaf-level values returned by the ROLAP system are always as up-to-date as the data mart itself. In other words, the ROLAP system does not add latency to leaf-level data. The disadvantage of a ROLAP system is that the retrieval of the aggregate and leaf-level values is slower than the other OLAP architectures.
Multidimensional OLAP
Multidimensional OLAP (MOLAP) also stores the cube structure in a multidimensional database. However, both the preprocessed aggregate values and a copy of the leaf-level values are placed in the multidimensional database as well. Because of this, all data requests are answered from the multidimensional database, making MOLAP systems extremely responsive. Additional time is required when loading a MOLAP system because all the leaf-level data is copied into the multidimensional database. Because of this, there are times when the leaf-level data returned by the MOLAP system is not in sync with the leaf-level data in the data mart itself. A MOLAP system, therefore, does add latency to the leaf-level data. The MOLAP architecture also requires more disk space to store the copy of the leaf-level values in the multidimensional database. However, because MOLAP is extremely efficient at storing values, the additional space required is usually not significant.
Hybrid OLAP
Hybrid OLAP (HOLAP) combines ROLAP and MOLAP storage. This is why we end up with the word "hybrid" in the name. HOLAP tries to take advantage of the strengths of each of the other two architectures while minimizing their weaknesses. HOLAP stores the cube structure and the preprocessed aggregates in a multidimensional database. This provides the fast retrieval of aggregates present in MOLAP structures. HOLAP leaves the leaf-level data in the relational data mart that serves as the source of the cube. This leads to longer retrieval times when accessing the leaf-level values. However, HOLAP does not need to take time to copy the leaf-level data from the data mart. As soon as the data is updated in the data mart, it is available to the decision maker. Therefore, HOLAP does not add latency to the leaf-level data. In essence, HOLAP sacrifices retrieval speed on leaf-level data to prevent adding latency to leaf-level data and to speed the data load.

Question No : 6
You design a Business Intelligence (BI) solution by using SQL Server 2008.
The solution includes a SQL Server 2008 Analysis Services (SSAS) database. The database contains a cube named Financials. The cube contains objects as shown in the exhibit.

A calculated member named Gross Margin references both Sales Details and Product Costs.
You need to ensure that the solution meets the following requirements:
What should you do?
A. Add dimension-level security and enable the Visual Totals option.
B. Add cell-level security that has read permissions on the Gross Margin measure.
C. Add cell-level security that has read contingent permissions on the Gross Margin measure.
D. Change the permissions on the Managers dimension level from Read to Read/Write.

Answer: A
Explanation:
http://msdn.microsoft.com/en-us/library/ms174927.aspx
User Access Security Architecture
Microsoft SQL Server Analysis Services relies on Microsoft Windows to authenticate users. By default, only authenticated users who have rights within Analysis Services can establish a connection to Analysis Services. After a user connects to Analysis Services, the permissions that user has within Analysis Services are determined by the rights that are assigned to the Analysis Services roles to which that user belongs, either directly or through membership in a Windows role.
Dimension-Level Security
A database role can specify whether its members have permission to view or update dimension members in specified database dimensions. Moreover, within each dimension to which a database role has been granted rights, the role can be granted permission to view or update specific dimension members only instead of all dimension members. If a database role is not granted permissions to view or update a particular dimension and some or all the dimension's members, members of the database role have no permission to view the dimension or any of its members. Note: Dimension permissions that are granted to a database role apply to the cube dimensions based on the database dimension, unless different permissions are explicitly granted within the cube that uses the database dimension.
Cube-Level Security
A database role can specify whether its members have read or read/write permission to one or more cubes in a database. If a database role is not granted permissions to read or read/write at least one cube, members of the database role have no permission to view any cubes in the database, despite any rights those members may have through the role to view dimension members.
Cell-Level Security
A database role can specify whether its members have read, read contingent, or read/write permissions on some or all cells within a cube. If a database role is not granted permissions on cells within a cube, members of the database role have no permission to view any cube data. If a database role is denied permission to view certain dimensions based on dimension security, cell-level security cannot expand the rights of the database role members to include cell members from that dimension. On the other hand, if a database role is granted permission to view members of a dimension, cell-level security can be used to limit the cell members from the dimension that the database role members can view.

Question No : 7
You design a Business Intelligence (BI) solution by using SQL Server 2008.
The solution includes a SQL Server 2008 Reporting Services (SSRS) infrastructure in a scale-out deployment. All reports use a SQL Server 2008 relational database as the data source. You implement row-level security.
You need to ensure that all reports display only the expected data based on the user who is viewing the report.
What should you do?
A. Store the credential of a user in the data source.
B. Configure the infrastructure to support Kerberos authentication.
C. Configure the infrastructure to support anonymous authentication by using a custom authentication extension.
D. Ensure that all report queries add a filter that uses the User.UserID value as a hidden parameter.
Answer: B

Explanation:
Question No : 8
You design a Business Intelligence (BI) solution by using SQL Server 2008.
You need to load data into your online transaction processing (OLTP) database once a week by using data from a flat file. The file contains all the details about new employees who joined your company last week. The data must be loaded into the tables shown in the exhibit. (Click the Exhibit button.) Employee.EmployeeID is an identity.

A SQL Server 2008 Integration Services (SSIS) package contains one data flow for each of the destination tables. In the Employee Data Flow, an OLE DB Command transformation executes a stored procedure that loads the Employee record and returns the EmployeeID value.
You need to accomplish the following tasks:
What should you do?
A. Use a Lookup Transformation in each of the child table data flows to find the EmployeeID based on first name and last name.
B. Store the EmployeeID values in SSIS variables and use the variables to populate the FK columns in each of the child tables.
C. After the Employee table is loaded, write the data to a Raw File Destination and use the raw file as a source for each of the subsequent Data Flows.
D. After the Employee table is loaded, write the data to a Flat File Destination and use the flat file as a source for each of the subsequent Data Flows.
Answer: C

Explanation:
http://technet.microsoft.com/en-us/library/ms141661.aspx
Raw File Destination
The Raw File destination writes raw data to a file. Because the format of the data is native to the destination, the data requires no translation and little parsing. This means that the Raw File destination can write data more quickly than other destinations such as the Flat File and the OLE DB destinations. You can configure the Raw File destination in the following ways:
-Specify an access mode, which is either the name of the file or a variable that contains the name of the file to which the Raw File destination writes.
-Indicate whether the Raw File destination appends data to an existing file that has the same name or creates a new file.
The Raw File destination is frequently used to write intermediary results of partly processed data between package executions. Storing raw data means that the data can be read quickly by a Raw File source and then further transformed before it is loaded into its final destination. For example, a package might run several times, and each time write raw data to files. Later, a different package can use the Raw File source to read from each file, use a Union All transformation to merge the data into one data set, and then apply additional transformations that summarize the data before loading the data into its final destination such as a SQL Server table.
Raw File Source
The Raw File source lets us utilize data that was previously written to a raw data file by a Raw File destination. The raw file format is the native format for Integration Services. Because of this, raw files can be written to disk and read from disk rapidly. One of the goals of Integration Services is to improve processing efficiency by moving data from the original source to the ultimate destination without making any stops in between. However, on some occasions, the data must be staged to disk as part of an Extract, Transform, and Load process. When this is necessary, the raw file format provides the most efficient means of accomplishing this task.
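For context, the stored procedure that the Employee data flow's OLE DB Command executes might look like the following minimal T-SQL sketch; the procedure and column names are hypothetical, but it shows how the EmployeeID identity value is returned so the package can write it, with the rest of the row, to the raw file consumed by the child data flows:

-- T-SQL: hypothetical load procedure returning the generated identity value.
CREATE PROCEDURE dbo.usp_LoadEmployee
    @FirstName  nvarchar(50),
    @LastName   nvarchar(50),
    @EmployeeID int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    -- Employee.EmployeeID is an identity, so the database assigns it.
    INSERT INTO dbo.Employee (FirstName, LastName)
    VALUES (@FirstName, @LastName);
    -- Hand the new key back to the data flow for the child tables' FK columns.
    SET @EmployeeID = SCOPE_IDENTITY();
END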

Question No : 9
You design a Business Intelligence (BI) solution by using SQL Server 2008.
You create a SQL Server 2008 Integration Services (SSIS) package to perform an extract, transform, and load (ETL) process to load data to a DimCustomer dimension table that contains 1 million rows.
Your data flow uses the following components:
What should you do?
A. Modify the UPDATE statement in the OLE DB Command transform to use the PAGLOCK table hint.
B. Modify the UPDATE statement in the OLE DB Command transform to use the TABLOCK table hint.
C. Stage the data in the data flow. Replace the OLE DB Command transform in the data flow with an Execute SQL task in the control flow.
D. Stage the data in the data flow. Replace the UPDATE statement in the OLE DB Command transform with a DELETE statement followed by an INSERT statement.
Answer: C

Explanation:
Data Flow
Once we set the precedence constraints for the control flow tasks in the package, we can define each of the data flows. This is done on the Data Flow Designer tab. Each data flow task that was added to the control flow has its own layout on the Data Flow Designer tab. We can switch between different data flows using the Data Flow Task drop-down list located at the top of the Data Flow tab. The Data Flow Toolbox contains three types of items: data flow sources, data flow transformations, and data flow destinations. However, on some occasions, the data must be staged to disk as part of an Extract, Transform, and Load process. When this is necessary, the Raw File format provides the most efficient means of accomplishing this task.
Execute SQL Task
The Execute SQL task enables us to execute SQL statements or stored procedures. The contents of variables can be used for input, output, or input/output parameters and the return value. We can also save the result set from the SQL statements or stored procedure in a package variable. This result set could be a single value, a multirow/multicolumn result set, or an XML document.
http://www.sqlservercentral.com/blogs/dknight/archive/2008/12/29/ssis-avoid-ole-db-command.aspx
SSIS - Avoid OLE DB Command
The OLE DB Command runs insert, update, or delete statements for each row. That means every single row that goes through your package would have the statement run against the database when it gets to an OLE DB Command. So if you know you are dealing with more than just a couple hundred rows per run, then I would highly suggest using a staging table vs. the OLE DB Command.
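To illustrate answer C, here is a minimal T-SQL sketch of the set-based UPDATE that the Execute SQL task could run after the data flow stages the changed rows; the staging table and column names are hypothetical:

-- T-SQL: one set-based UPDATE replaces the row-by-row OLE DB Command.
UPDATE d
SET    d.FirstName    = s.FirstName,
       d.LastName     = s.LastName,
       d.EmailAddress = s.EmailAddress
FROM   dbo.DimCustomer AS d
INNER JOIN dbo.Staging_DimCustomer AS s
        ON s.CustomerAlternateKey = d.CustomerAlternateKey;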

Question No : 10
You design a SQL Server 2008 Analysis Services (SSAS) solution. The data source view has tables as shown in the exhibit. (Click the Exhibit button.)

The FactInternetSales measure will be queried frequently based on the city and country of the customer.
You need to design a cube that will provide optimal performance for queries.
Which design should you choose?
A. Create two dimensions named Customer and Geography from the DimCustomer table and the DimGeography table, respectively. Create a materialized reference relationship between the Geography dimension and the FactInternetSales measure by using the Customer dimension as an intermediate dimension.
B. Create two dimensions named Customer and Geography from the DimCustomer table and the DimGeography table, respectively. Create an unmaterialized reference relationship between the Geography dimension and the FactInternetSales measure by using the Customer dimension as an intermediate dimension.
C. Create a dimension named Customer by joining the DimGeography and DimCustomer tables. Add an attribute relationship from CustomerKey to City and from City to Country. Create a regular relationship in the cube between the Customer dimension and the FactInternetSales measure.
D. Create a dimension named Customer by joining the DimGeography and DimCustomer tables. Add an attribute relationship from CustomerKey to City and from CustomerKey to Country. Create a regular relationship in the cube between the Customer dimension and the FactInternetSales measure.
Answer: C
Explanation:

Question No : 11
You design a Business Intelligence (BI) solution by using SQL Server 2008.
Employees use a Windows Forms application based on Microsoft .NET Framework 3.5. SQL Server is not installed on the employees’ computers.
You write a report by using Report Definition Language (RDL).
You need to ensure that if the employees are disconnected from the corporate network, the application renders the report.
What should you do?
A. Configure the application to use an SSRS Web service by using the Render method.
B. Configure the application to use an SSRS Web service by using the RenderStream method.
C. Embed ReportViewer in the application and configure ReportViewer to render reports by using the local processing mode.
D. Embed ReportViewer in the application and configure ReportViewer to render reports by using the remote processing mode.

Answer: C
Explanation:
Embedding Custom ReportViewer Controls
Microsoft provides two controls in Visual Studio 2008 that allow you to embed SSRS reports (or link to an existing SSRS report hosted on an SSRS instance) in your custom Windows Forms or Web Forms applications. Alternatively, you can also design some types of reports from within Visual Studio and then host them in your custom applications. The two report processing modes that this control supports are remote processing mode and local processing mode.
Remote processing mode allows you to include a reference to a report that has already been deployed to a report server instance. In remote processing mode, the ReportViewer control encapsulates the URL access method we covered in the previous section. It uses the SSRS Web service to communicate with the report server. Referencing deployed reports is preferred for BI solutions because the overhead of rendering and processing the often large BI reports is handled by the SSRS server instance or instances. Also, you can choose to scale report hosting to multiple SSRS servers if scaling is needed for your solution. Another advantage to this mode is that all installed rendering and data extensions are available to be used by the referenced report.
Local processing mode allows you to run a report from a computer that does not have SSRS installed on it. Local reports are defined differently within Visual Studio itself, using a visual design interface that looks much like the one in BIDS for SSRS. The output file is in a slightly different format for these reports if they're created locally in Visual Studio. It's an *.rdlc file rather than an *.rdl file, which is created when using a Report Server Project template in BIDS. The *.rdlc file is defined as an embedded resource in the Visual Studio project. When displaying *.rdlc files to a user, data retrieval and processing is handled by the hosting application, and the report rendering (translating it to an output format such as HTML or PDF) is handled by the ReportViewer control. No server-based instance of SSRS is involved, which makes it very useful when you need to deploy reports to users that are only occasionally connected to the network and thus wouldn't have regular access to the SSRS server. Only PDF, Excel, and image-rendering extensions are supported in local processing mode.
If you use local processing mode with some relational data as your data source, a new report design area opens up. As mentioned, the metadata file generated has the *.rdlc extension. When working in local processing mode in Visual Studio 2008, you're limited to working with the old-style data containers, that is, table, matrix, or list. The new combined-style Tablix container is not available in this report design mode in Visual Studio 2008. Both versions of this control include a smart tag that helps you to configure the associated required properties for each of the usage modes. Also, the ReportViewer control is freely redistributable, which is useful if you're considering using either version as part of a commercial application.

Question No : 12
You design a SQL Server 2008 Reporting Services (SSRS) solution. The solution contains a report. The report includes information that is grouped into hierarchical levels.
You need to ensure that the solution meets the following requirements:
Which feature should the report use?
A. filter
B. drilldown
C. drillthrough
D. a document map

Answer: B
Explanation:
http://technet.microsoft.com/en-us/library/dd207141.aspx
Drillthrough, Drilldown, Subreports, and Nested Data Regions (Report Builder 3.0 and SSRS)
You can organize data in a variety of ways to show the relationship of the general to the detailed. You can put all the data in the report, but set it to be hidden until a user clicks to reveal details; this is a drilldown action. You can display the data in a data region, such as a table or chart, which is nested inside another data region, such as a table or matrix. You can display the data in a subreport that is completely contained within a main report. Or, you can put the detail data in drillthrough reports, separate reports that are displayed when a user clicks a link.

Question No : 13
You design a Business Intelligence (BI) solution by using SQL Server 2008.
You plan to develop SQL Server 2008 Reporting Services (SSRS) reports. Several reports will contain identical data regions.
You need to minimize the amount of maintenance when changes are made to the data regions.
What should you do?
A. Grant the Create Linked Reports role to all users.
B. Create each data region as a report. Embed the reports by using the subreport control.
C. Create a report template for each data region. Use the report template to create each report.
D. Create a shared data source in the SSRS project. Use the new shared data source for all reports.
Answer: B

Explanation:
Question No : 14
You are designing a SQL Server 2008 Reporting Services (SSRS) solution. You have a report that has several parameters that are populated when users execute the report.
You need to ensure that the solution meets the following requirements:
Which feature should you use?
A. My Reports
B. Linked Reports
C. Standard Subscription
D. Data-Driven Subscription
Answer: B

Explanation:
With a linked report, our report is deployed to one folder. It is then pointed to by links placed elsewhere within the Report Catalog. To the user, the links look just like a report. Because of these links, the report appears to be in many places. The sales department sees it in their folder. The personnel department sees it in their folder. The fact of the matter is the report is only deployed to one location, so it is easy to administer and maintain. An execution snapshot is another way to create a cached report instance. Up to this point, we have discussed situations where cached report instances are created as the result of a user action. A user requests a report, and a copy of that report's intermediate format is placed in the report cache. With execution snapshots, a cached report instance is created automatically. Not all users can change execution snapshots. To change the execution snapshot properties for a report, you must have rights to the Manage Reports task. Of the four predefined security roles, the Content Manager, My Reports, and Publisher roles have rights to this task. (McGraw-Hill – Delivering Business Intelligence with Microsoft SQL Server 2008 (2009))
http://msdn.microsoft.com/en-us/library/bb630404.aspx A linked report is a report server item that provides an access point to an existing report. Conceptually, it is similar to a program shortcut that you use to run a program or open a file. A linked report is derived from an existing report and retains the original’s report definition. A linked report always inherits report layout and data source properties of the original report. All other properties and settings can be different from those of the original report, including security, parameters, location, subscriptions, and schedules. You can create a linked report on the report server when you want to create additional versions of an existing report. For example, you could use a single regional sales report to create region-specific reports for all of your sales territories. Although linked reports are typically based on parameterized reports, a parameterized report is not required. You can create linked reports whenever you want to deploy an existing report with different settings.
Question No : 15
You design a Business Intelligence (BI) solution by using SQL Server 2008.
You have developed SQL Server 2008 Reporting Services (SSRS) reports that are deployed on an SSRS instance.
You plan to develop a new application to view the reports. The application will be developed by using Microsoft ASP.NET 3.5.
You need to ensure that the application can perform the following tasks:
What should you do?
A. Configure the ASP.NET application to use the SSRS Web service.
B. Configure the ASP.NET application to use URL access along with the Command parameter.
C. Embed a ReportViewer control in the ASP.NET application. Configure the control to use the local processing mode.
D. Embed a ReportViewer control in the ASP.NET application. Configure the control to use the remote processing mode.

Answer: A
Explanation:
Report Server Web Service
The Report Server Web service is the core engine for all on-demand report and model processing requests that are initiated by a user or application in real time, including most requests that are directed to and from Report Manager. It includes more than 70 public methods for you to access SSRS functionality programmatically. The Report Manager Web site accesses these Web services to provide report rendering and other functionality. Also, other integrated applications, such as the Report Center in Office SharePoint Server 2007, call SSRS Web services to serve up deployed reports to authorized end users. The Report Server Web service performs end-to-end processing for reports that run on demand. To support interactive processing, the Web service authenticates the user and checks the authorization rules prior to handling a request. The Web service supports the default Windows security extension and custom authentication extensions. The Web service is also the primary programmatic interface for custom applications that integrate with Report Server, although its use is not required. If you plan to develop a custom interface for your reports, rather than using the provided Web site or some other integrated application (such as Office SharePoint Server 2007), you'll want to explore the SQL Server Books Online topic "Reporting Services Web Services Class Library." There you can examine specific Web methods.

Question No : 16
You design a Business Intelligence (BI) solution by using SQL Server 2008.
You design a SQL Server 2008 Reporting Services (SSRS) report that meets the following requirements:
You need to design the report to minimize the impact on bandwidth.
What should you do?
A. Create a standard report that contains all sales orders. Implement report filtering based on the month.
B. Create a standard report that contains all sales orders. Implement grouping for the monthly summaries.
C. Create a standard report that contains the monthly summaries. Create a subreport for the sales orders for any given month.
D. Create a standard report that contains the monthly summaries. Create a drillthrough report for the sales orders for any given month.
Answer: D

Explanation:
Drillthrough Action: Defines a dataset to be returned as a drillthrough to a more detailed level.
Creating Drillthrough Actions: For the most part, Drillthrough Actions have the same properties as Actions. Drillthrough Actions do not have Target Type or Target Object properties. In their place, the Drillthrough Action has the following:
-Drillthrough Columns: Defines the objects to be included in the drillthrough dataset.
-Default: A flag showing whether this is the default Drillthrough Action.
-Maximum Rows: The maximum number of rows to be included in the drillthrough dataset.
http://technet.microsoft.com/en-us/library/ff519554.aspx
Drillthrough Reports (Report Builder 3.0 and SSRS)
A drillthrough report is a report that a user opens by clicking a link within another report. Drillthrough reports commonly contain details about an item that is contained in an original summary report. The data in the drillthrough report is not retrieved until the user clicks the link in the main report that opens the drillthrough report. If the data for the main report and the drillthrough report must be retrieved at the same time, consider using a subreport.

Question No : 17
You design a Business Intelligence (BI) solution by using SQL Server 2008.
You create a sales report by using SQL Server 2008 Reporting Services (SSRS). The report is used by managers in a specific country.
Each manager prints multiple copies of the report that contains the previous day’s sales for each of their sales executives.
You need to ensure that the report uses the minimum number of round trips to the database server.
What should you do?
A. Query the database for both Country and Sales Executive.
B. Implement report filtering for both Country and Sales Executive.
C. Implement report filtering for Country and query the data source for Sales Executive.
D. Implement report filtering for Sales Executive and query the data source for Country.
Answer: D

Explanation:
http://technet.microsoft.com/en-us/library/dd239395.aspx
Choosing When to Set a Filter
Specify filters for report items when you cannot filter data at the source. For example, use report filters when the data source does not support query parameters, or you must run stored procedures and cannot modify the query, or a parameterized report snapshot displays customized data for different users. You can filter report data before or after it is retrieved for a report dataset. To filter data before it is retrieved, change the query for each dataset. When you filter data in the query, you filter data at the data source, which reduces the amount of data that must be retrieved and processed in a report. To filter data after it is retrieved, create filter expressions in the report. You can set filter expressions for a dataset, a data region, or a group, including detail groups. You can also include parameters in filter expressions, providing a way to filter data for specific values or for specific users, for example, filtering on a value that identifies the user viewing the report.
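As a minimal T-SQL sketch of answer D, the dataset query can push the Country filter to the data source through a query parameter, while Sales Executive is filtered in the report; the table and column names here are hypothetical:

-- T-SQL: only the manager's country and the previous day's rows cross the network;
-- a report filter on Sales Executive then runs locally with no extra round trips.
SELECT SalesExecutive, OrderDate, SalesAmount
FROM   dbo.FactDailySales
WHERE  Country   = @Country
  AND  OrderDate = DATEADD(day, -1, CAST(GETDATE() AS date));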

Question No : 18
You design a Business Intelligence (BI) solution by using SQL Server 2008.
You plan to create a SQL Server 2008 Reporting Services (SSRS) report. The report must display the list of orders placed through the Internet.
You need to ensure that the following requirements are met:
Which type of report should you create?
A. Linked
B. Ad Hoc
C. Cached
D. Snapshot
Answer: D

Explanation:
http://msdn.microsoft.com/en-us/library/bb630404.aspx#Snapshot
A report snapshot is a report that contains layout information and query results that were retrieved at a specific point in time. Unlike on-demand reports, which get up-to-date query results when you select the report, report snapshots are processed on a schedule and then saved to a report server. When you select a report snapshot for viewing, the report server retrieves the stored report from the report server database and shows the data and layout that were current for the report at the time the snapshot was created. Report snapshots are not saved in a particular rendering format. Instead, report snapshots are rendered in a final viewing format (such as HTML) only when a user or an application requests it. Deferred rendering makes a snapshot portable. The report can be rendered in the correct format for the requesting device or Web browser. Report snapshots serve three purposes:
-Report history. By creating a series of report snapshots, you can build a history of a report that shows how data changes over time.
-Consistency. Use report snapshots when you want to provide consistent results for multiple users who must work with identical sets of data. With volatile data, an on-demand report can produce different results from one minute to the next. A report snapshot, by contrast, allows you to make valid comparisons against other reports or analytical tools that contain data from the same point in time.
-Performance. By scheduling large reports to run during off-peak hours, you can reduce processing impact on the report server during core business hours.

Question No : 19
You are creating a SQL Server 2008 Reporting Services (SSRS) solution for a company that has offices in different countries. The company has a data server for each country.
Sales data for each country is persisted in the respective data server for the country. Report developers have only Read access to all data servers. All data servers have the same schema for the database.
You design an SSRS solution to view sales data.
You need to ensure that users are able to easily switch between sales data for different countries.
What should you do?
A. Implement a single shared data source.
B. Implement multiple shared data sources.
C. Implement an embedded data source that has a static connection string.
D. Implement an embedded data source that has an expression-based connection string.
Answer: D

Explanation:
http://msdn.microsoft.com/en-us/library/ms156450.aspx
Expression-based connection strings are evaluated at run time. For example, you can specify the data source as a parameter, include the parameter reference in the connection string, and allow the user to choose a data source for the report. For example, suppose a multinational firm has data servers in several countries. With an expression-based connection string, a user who is running a sales report can select a data source for a particular country before running the report. Design the report using a static connection string. A static connection string refers to a connection string that is not set through an expression (for example, when you follow the steps for creating a report-specific or shared data source, you are defining a static connection string). Using a static connection string allows you to connect to the data source in Report Designer so that you can get the query results you need to create the report. When defining the data source connection, do not use a shared data source. You cannot use a data source expression in a shared data source. You must define an embedded data source for the report. Specify credentials separately from the connection string. You can use stored credentials, prompted credentials, or integrated security. Add a report parameter to specify a data source. For parameter values, you can either provide a static list of available values (in this case, the available values should be data sources you can use with the report) or define a query that retrieves a list of data sources at run time.
Be sure that the list of data sources shares the same database schema. All report design begins with schema information. If there is a mismatch between the schema used to define the report and the actual schema used by the report at run time, the report might not run. Before publishing the report, replace the static connection string with an expression. Wait until you are finished designing the report before you replace the static connection string with an expression. Once you use an expression, you cannot execute the query in Report Designer. Furthermore, the field list in the Report Data pane and the Parameters list will not update automatically.

Question No : 20
You design a Business Intelligence (BI) solution by using SQL Server 2008.

The solution will contain a total of 100 different reports created by using Report Definition Language (RDL).
Each report must meet the following requirements:
The business rules for all reports that determine the calculations change frequently.
You need to design a solution that meets the requirements. You need to perform this action by using the minimum amount of development and maintenance effort.

What should you do?
A. Create hidden parameters in each report.
B. Create internal parameters in each report.
C. Implement the function in the <Code> element of each report.
D. Implement the function in a custom assembly. Reference the assembly in each report.
Answer: D

Explanation:
http://msdn.microsoft.com/en-us/library/ms159238.aspx
Including References to Code from Custom Assemblies
To use custom assemblies in a report, you must first create the assembly, make it available to Report Designer, add a reference to the assembly in the report, and then use an expression in the report to refer to the methods contained in that assembly. When the report is deployed to the report server, you must also deploy the custom assembly to the report server. To refer to custom code in an expression, you must call the member of a class within the assembly. How you do this depends on whether the method is static or instance-based. Static methods within a custom assembly are available globally within the report. You can access static methods in expressions by specifying the namespace, class, and method name. The following example calls the method ToGBP, which converts the value of the StandardCost value from dollars to pounds sterling:
=CurrencyConversion.DollarCurrencyConversion.ToGBP(Fields!StandardCost.Value)
Instance-based methods are available through a globally defined Code member. You access these by referring to the Code member, followed by the instance and method name. The following example calls the instance method ToEUR, which converts the value of StandardCost from dollars to euros:
=Code.m_myDollarConversion.ToEUR(Fields!StandardCost.Value)
Note: In Report Designer, a custom assembly is loaded once and is not unloaded until you close Visual Studio. If you preview a report, make changes to a custom assembly used in the report, and then preview the report again, the changes will not appear in the second preview. To reload the assembly, close and reopen Visual Studio and then preview the report.
