Wednesday 27 December 2017

Data quality metrics for Performance Measurement

Introduction


Data quality can be measured through metrics, which in turn help to identify issues and help the performance engineer to create or modify data so that it adheres to the required quality. Data quality depends on the type of application, the type of tables/views used in the application, and so on. If the data quality metrics are not adhered to, the performance measurement is compromised.

SAP applications are used by many companies. With the availability of the SAP HANA platform, the business applications developed at SAP have undergone a paradigm shift: complex operations are pushed down to the database. The rule of thumb for getting the best performance is therefore to do as much as you can in the database. Applications developed on SAP HANA use a new data modeling infrastructure known as Core Data Services (CDS). With CDS views, data models are defined and consumed on the database rather than on the application server, and the application developer can use various built-in functions, extensions, and so on.

Performance Test Process

The Fiori applications that are developed try to make the most of the SAP HANA platform. In S/4HANA applications, whenever the Fiori application makes a request to retrieve information, the request hits the CDS views: an SQL query with the CDS view name in the FROM clause, along with the filters, is passed to the HANA database. The query is executed in the HANA database, and the result set is returned to the Fiori UI.

To measure the performance of a Fiori application for a single user, the performance test usually starts with dry runs, after which the performance of the application is measured. The measured performance is then compared with the defined thresholds and violations are identified.

Data Quality


For measuring the performance of the application, data quality plays a crucial role. The test system in which the performance measurement is taken should have data of adequate quality. Some CDS views are used more frequently and have a higher transaction volume than others, so there is a need to distinguish the CDS views that have high volume and are used more frequently. Such views need to adhere to the performance standards, and a lag in response is not acceptable. A lag in response may occur due to the following factors:

1. Filters are not pushed down correctly
2. Join conditions or cyclic joins degrade the performance
3. Redundant unions or joins exist in the CDS view
4. Currency conversion is not modeled appropriately
5. Execution of the CDS view generates many temporary tables, often caused by materializing fields with aggregation over a large set of rows

These factors need to be considered while designing CDS views. Apart from them, there are cases in which performance issues are not detected because the system holds very little data (fewer than 1,000 rows). The inherent problems within a CDS view only become visible or detectable when the system holds at least the bare minimum amount of data implied by annotations such as Service Quality and Size.

Data Quality Metrics:


CDS views are categorized into ‘S’, ‘M’, ‘L’, ‘XL’ and ‘XXL’. In the CDS view, the ObjectModel.usageType.sizeCategory annotation is used to define the size, based on the volume of data the view can expect.
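
For illustration, the size category is declared directly in the DDL source of the CDS view. A minimal, hypothetical example (the view name, SQL view name and field list are placeholders; only the annotation is the point):

@AbapCatalog.sqlViewName: 'ZEXAMPLEVIEW'
@ObjectModel.usageType.sizeCategory: #L
define view Z_Example_View as select from aenr {
  key aennr
}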

The resource consumption on HANA is mainly driven by two factors:

◈ The set of data which has to be searched through, and
◈ The set of data which has to be materialized in order to compute the result set.

These metrics help to identify whether a sufficient number of rows exists in the HANA database. They are only an indicator of whether a single-user performance measurement can be performed at all; if this bare minimum is not met, one will not be able to unearth the defects that creep in over time as the data grows. As a guideline, size category S expects fewer than 1,000 rows, size category M fewer than 10^5 rows, and size category L up to 10^7 rows.

SQL statement to retrieve the table information:


A report program can be written that takes a CDS view name as user input. With the statement below, it is first determined whether the given input is a CDS view at all.

SELECT OBJECTNAME FROM DDLDEPENDENCY WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = '<CDS_NAME>' INTO TABLE @DATA(ENTITYTAB).

Here, <CDS_NAME> is to be replaced with the CDS view name.

For example:


SELECT OBJECTNAME FROM DDLDEPENDENCY WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = 'A_CHANGEMASTER' INTO TABLE @DATA(ENTITYTAB).

“API_CHANGEMASTER” is a whitelisted service listed in the SAP API Hub. When this OData service is invoked by any client-side application (Fiori or a custom application), it internally hits the “A_CHANGEMASTER” CDS view.

When we execute the above query, we retrieve the object name; in this case it is a valid and activated CDS view. Once we have the object name, we can determine the tables used by the CDS view.

If we want to retrieve all the CDS views whose names start with 'A_C', it can be done as follows:

SELECT DISTINCT SRC~DDLNAME, TADIR~DEVCLASS AS PACKAGE, TADIR~AUTHOR AS AUTHOR
  FROM TADIR INNER JOIN DDDDLSRC AS SRC ON TADIR~OBJ_NAME = SRC~DDLNAME
  WHERE TADIR~PGMID = 'R3TR' AND TADIR~OBJECT = 'DDLS'
    AND SRC~AS4LOCAL = 'A'
    AND SRC~DDLNAME LIKE 'A_C%'
  INTO TABLE @DATA(DDLS).

IF LINES( DDLS ) > 0.
  SELECT OBJECTNAME FROM DDLDEPENDENCY FOR ALL ENTRIES IN @DDLS
    WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = @DDLS-DDLNAME
    INTO TABLE @DATA(ENTITYTAB).
ENDIF.

Now loop through the tables to find the number of rows present in the database. For the CDS view size category, this is the starting point for judging the quality of the data. To be more stringent, depending on the type of application, we can also check the number of distinct entries present in the table. This helps to identify whether enough data is present. If there are too few entries, the quality engineer must create data before taking the performance measurement.

IF SY-SUBRC = 0.
  DATA LR_VISITOR TYPE REF TO CL_DD_DDL_META_NUM_COLLECTOR.
  CREATE OBJECT LR_VISITOR TYPE CL_DD_DDL_META_NUM_COLLECTOR EXPORTING DESCEND = ABAP_TRUE.

  DATA L_NAME TYPE STRING.
  DATA TABLENAME TYPE STRING.
  DATA COUNT TYPE I.

  LOOP AT ENTITYTAB INTO DATA(LV_ENAME).
    L_NAME = LV_ENAME-OBJECTNAME.
    " Collect metadata (including the tables used) for this CDS view
    LR_VISITOR->VISITDDLSOURCE( IV_DSNAME = L_NAME ).
    DATA(LR_NUMBERMAP) = LR_VISITOR->GETNUMBERMAP( ).

    READ TABLE LR_NUMBERMAP ASSIGNING FIELD-SYMBOL(<CDS_VIEW>)
         WITH KEY ENTITY = LV_ENAME-OBJECTNAME.
    IF SY-SUBRC NE 0.
      CONTINUE.
    ENDIF.

    " Loop over the tables used by the CDS view and count their rows
    LOOP AT <CDS_VIEW>-NUMBERS-TAB_INFO-TABLE_TAB ASSIGNING FIELD-SYMBOL(<TAB_INFO>).
      COLLECT <TAB_INFO> INTO LR_TABS. " LR_TABS: internal table with the same line type, declared elsewhere in the report
      TABLENAME = <TAB_INFO>-TABNAME.
      SELECT COUNT( * ) FROM (TABLENAME) INTO COUNT.
    ENDLOOP.
  ENDLOOP.
ENDIF.

Here we get the number of entries present in each table. For the above example there is only one table, AENR. When we query the number of entries in table AENR using COUNT( * ), the output is 100 rows. This is too little, as the size category annotation is specified as ‘L’; based on the size category definition, the table should hold well above 10^5 rows (and up to 10^7) before performance measurement can start.
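
To apply the stricter distinct-entries check mentioned above, a distinct count can be added per key column. A minimal sketch (the field AENNR of table AENR and the threshold value are only illustrative assumptions):

" Illustrative only: count distinct change numbers in AENR to judge data variety.
SELECT COUNT( DISTINCT aennr ) FROM aenr INTO @DATA(lv_distinct_count).
IF lv_distinct_count < 1000. " threshold is only an example
  " Data is too uniform or too small for a meaningful single-user measurement.
ENDIF.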

Sunday 26 November 2017

Manufacturer Part Profile in QM

Purpose: The Materials Management (MM) component supports the procurement of manufacturer-specific parts or materials from different vendors. If you implement the functions for manufacturer part number (MPN) processing, you can also process goods receipt inspections for manufacturer-specific parts or materials in the Quality Management (QM) component.

We can:

◉ Block or release a request to deliver manufacturer-specific parts or materials in a quality info record
◉ Waive the inspection requirement for manufacturer-specific parts or materials (provided the vendor and manufacturer have certified QM systems in use)
◉ Use manufacturer-specific inspection plans to inspect the manufacturer parts or materials

1. Configuration Requirement

Create Manufacturer Part Profile

T Code: OMPN


2. Assign this profile to the internal material

Create Manufacturer Part in MM01


Part – 1

This is procured from 1st Manufacturer: MF-1000

Create Material with (Material Type- HERS)


Part – 2

This is procured from 2nd Manufacturer: MF-1001

Create Material with (Material Type- HERS)


3. Create AMPL list

If you wish to influence the precise source or quality of materials, you can tell the vendor from whom you want to procure a material which manufacturer is to supply the material and which part number that manufacturer uses. You can also tell your vendor exactly which of the manufacturer’s plants is to supply the material you wish to procure. The manufacturer’s part number (MPN) and description, as well as the specific manufacturing plant (if applicable), are maintained with the help of the AMPL (Approved Manufacturer Parts List).

T Code: MP01


4. Create Q-info Record

Because of certain quality reasons, we block manufacturer MF-1001 for the creation of purchase orders.


5. PO

Then, at the time of PO creation, the system issues an error.


The error message shows that the material is blocked for procurement for quality reasons.

This scenario demonstrates how manufacturer quality can be controlled in this way.

Saturday 18 November 2017

Using Temporal Join in Composite Provider in BW/4HANA

Introduction


◉ From SAP BW 7.4 onwards and in BW/4HANA, the new Composite Providers are the main objects for defining unions/joins of existing persistent or virtual data models.
◉ Composite Providers are the successors of MultiProviders and BW InfoSets. In the classic BW warehouse, only BW InfoSets could define SQL joins between InfoProviders.
◉ From SAP BW 7.5 SP04 and in SAP BW/4HANA, Composite Providers also support the modeling of temporal joins in order to show time flows.


Demo Data Model


Let’s consider a simple sales data model to demonstrate how a temporal join works in an HCPR.


Advanced DSO and InfoObjects in BWMT


Transactional sales data were loaded into the aDSO ZAD_SALES.


Master data were loaded into the time-dependent attributes of the characteristics ZMANAGER and ZPRODUCT.


Composite Provider in BWMT


The aim of using a temporal join is to analyze the sales volume with the attribute values valid on the date the sales transaction actually occurred, not, for example, with the current values.


First we joined ZAD_SALES with ZMANAGER; don’t forget to select the key date ZDATE in the aDSO. We had to add another time characteristic, because the characteristic 0DATE was not allowed for key date selection.


In the next step we joined the result of the first join, J1, with ZPRODUCT.


As a result, the output contained each sales record together with the manager and product attribute values valid on the transaction date.


Query in BWMT


We created a simple query to check whether the temporal join works correctly. The query definition is very simple.


Query Monitor 


First of all, we ran the query in RSRT to analyze the generated join SQL statement. The temporal join restrictions were placed in the WHERE clause.

SELECT "J1ADSO2"."/BIC/ZSALESID" AS "K____5031",
       "J1IOBJ1"."/BIC/ZGRADE" AS "K____5032",
       "J1ADSO2"."/BIC/ZMANAGER" AS "K____5034",
       "J2IOBJ1"."/BIC/ZPRODMN" AS "K____5042",
       "J1ADSO2"."/BIC/ZPRODUCT" AS "K____5043",
       "J1ADSO2"."/BIC/ZDATE" AS "K____5065",
       "J1IOBJ1"."DATEFROM" AS "K____5077",
       "J1IOBJ1"."DATETO" AS "K____5078",
       "J2IOBJ1"."DATEFROM" AS "K____5086",
       "J2IOBJ1"."DATETO" AS "K____5087",
       SUM ("J1IOBJ1"."/BIC/ZBONUS") AS "Z____5033_SUM",
           SUM ("J2IOBJ1"."/BIC/ZPRICE") AS "Z____5035_SUM",
               SUM ("J1ADSO2"."/BIC/ZVOLUME") AS "Z____5054_SUM",
                   COUNT(*) AS "Z____1160_SUM"
FROM "/BIC/AZAD_SALES7" "J1ADSO2"
JOIN "/BIC/MZMANAGER" "J1IOBJ1" ON "J1ADSO2" . "/BIC/ZMANAGER" = "J1IOBJ1" . "/BIC/ZMANAGER"
JOIN "/BIC/MZPRODUCT" "J2IOBJ1" ON "J1ADSO2" . "/BIC/ZPRODUCT" = "J2IOBJ1" . "/BIC/ZPRODUCT"
WHERE "J1IOBJ1"."OBJVERS" = 'A'
  AND "J2IOBJ1"."OBJVERS" = 'A'
  AND "J1IOBJ1"."DATEFROM" <= "J2IOBJ1"."DATETO"
  AND "J2IOBJ1"."DATEFROM" <= "J1IOBJ1"."DATETO"
  AND "J1IOBJ1"."DATEFROM" <= "J1ADSO2"."/BIC/ZDATE"
  AND "J1ADSO2"."/BIC/ZDATE" <= "J1IOBJ1"."DATETO"
  AND "J2IOBJ1"."DATEFROM" <= "J1ADSO2"."/BIC/ZDATE"
  AND "J1ADSO2"."/BIC/ZDATE" <= "J2IOBJ1"."DATETO"
GROUP BY "J1ADSO2"."/BIC/ZSALESID",
         "J1IOBJ1"."/BIC/ZGRADE",
         "J1ADSO2"."/BIC/ZMANAGER",
         "J2IOBJ1"."/BIC/ZPRODMN",
         "J1ADSO2"."/BIC/ZPRODUCT",
         "J1ADSO2"."/BIC/ZDATE",
         "J1IOBJ1"."DATEFROM",
         "J1IOBJ1"."DATETO",
         "J2IOBJ1"."DATEFROM",
         "J2IOBJ1"."DATETO"
ORDER BY "K____5031" ASC,
         "K____5032" ASC,
         "K____5034" ASC,
         "K____5042" ASC,
         "K____5043" ASC,
         "K____5065" ASC,
         "K____5077" ASC,
         "K____5078" ASC,
         "K____5086" ASC,
         "K____5087" ASC

Analyze Data Result


We opened the query in Analysis for Excel; the resulting data showed that the temporal join was performed correctly, i.e.:

◉ The first manager had GRADE_05 in October and GRADE_15 in November.
◉ The prices of the products were also different in October and November.


Even if we exclude almost all characteristics and show only sales volumes by sales managers’ grades and product names, the historical perspective is still correct.


Performance and Notifications 


◉ Unfortunately, processing of a temporal join in an HCPR is currently not pushed down to HANA; this is in “under discussion” status.
◉ This means it works like the old-style BW InfoSet, and the same performance is to be expected.
◉ During activation of an HCPR with a temporal join we got a reminder message about this.


◉ In the Query Monitor we did not get the additional “HANA Calculation Engine Layer” tab.
◉ In BWMT, the properties of the HCPR and the properties of the BW query were not changed.


Thursday 16 November 2017

Submission Targets in BW Workspace

With the usual approach to data loading with minimal cleansing, a user has to create a transformation and a data transfer process (DTP) for cleansing and loading the data.

But with BW 7.50 SP04 on HANA, a new agile feature was introduced: submittable providers in the BW Workspace, where cleansing and loading of data into the provider (Advanced DataStore Object) is possible without creating a transformation and a data transfer process. Target users of the BW Workspace (mainly business users) with modeling authorization can do this in an agile manner, yet under the control of the BW administrator.

The process involves the Workspace Administration and the Workspace Designer. In the Workspace Administration, the user must define the BW provider as a submission target to which the local data should be loaded from the workspace. In the provider submission settings, cleansing operations such as filters or master data checks can be pre-defined by the administrator to guarantee a certain quality of the uploaded data.

In the Workspace Designer, the business user loads the local data into the workspace through a local provider and submits it, via the Submit Data option, to the BW provider that was added to the submission target area in the Workspace Administration.

How to load the local data from the file to the BW Provider (ADSO) in BW Workspace?

Create a BW Workspace in transaction RSWSPW. In the Submission Targets tab, open the InfoProvider tree, search for the provider and add it to the submission area.


The fields of the ADSO can be marked as writeable and mandatory fields. If fields are marked as writeable, only those fields are exposed for data submission and for mapping of fields from the local provider. If fields are marked as mandatory, they have to be mapped to columns of the local provider and cannot be deselected in the Workspace Designer.

If none of the fields is selected as writeable or mandatory, no field is exposed for data submission, which leads to an error: you have to select at least one writeable field, as selecting none would not make sense.

Key fields of the aDSO are marked as writeable and mandatory by default and cannot be changed.


Cleansing can be defined in two ways. First, from the Workspace Administration: the submission settings for a provider are defined using the Provider Submission Settings option. There, the administrator can select a field, click on the filter icon and define a filter for the field by adding a new line. If the Master Data flag is checked, a data integrity check is performed when the data is submitted; if the data to be loaded does not match the master data values of the InfoObject, an error is raised for the submission. Second, additional cleansing can be applied by the business user in the Workspace Designer, who can define further filters and cleansing rules to transform the local file data. More about this is explained later in this document.

Rules defined in the Workspace Administration have to be fulfilled during data submission from the local provider. In this way the administrator can control the quality and values of the data that lands in the aDSO.


To submit the data from the local provider to the ADSO, go to the Workspace Designer, e.g. by executing transaction NWBC. In the Workspace Designer, click on Submit Data. On the Submit Data page, provide the source (local provider) and the target provider (ADSO) name.


In the Field Mapping step, the fields of the ADSO that are marked as mandatory in the Workspace Administration are pre-selected, and deselection is disabled for those columns in the Workspace Designer, i.e. these fields have to be mapped to columns of the local provider. If a field is not mandatory, the selection column is enabled so that the user can choose whether data should be loaded into that field.

Also in the Field Mapping step, additional filter and cleansing operations can be applied to the local provider rows by selecting a mapping pair, clicking on the add icon for filters and defining an additional filter. Filters set by the administrator also appear here but cannot be changed. Cleansing operations, such as checking the master data values for a field, initializing the value if no master data value is found in the local data, and checking for conversion errors, can be defined per field. There is also the option to specify rules that apply to all fields by clicking on the General Cleansing button.


Once the mapping is specified and all the defined conditions are fulfilled, the local provider data is loaded into the target ADSO through a request TSN. On submission, a write API is prepared and starts writing to the inbound queue of the ADSO. Once the data is written into the inbound queue, the request can be activated to bring the data into the active table, after which it can be used in queries. Whether the data is activated directly or not can be controlled in the Workspace Administration.

Wednesday 15 November 2017

Multiple attribute values for one line of data: how to avoid infosets in BW without HANA

Introduction


I’d like to share my experience and analyze possible solutions for a BW task which I have encountered on several retail projects.

I think this problem is quite interesting because it is an example of a more general question: how can we avoid using joins (InfoSets) on BW systems without HANA?

Problem description


Let’s imagine that we have POS data (receipts), and every position of every receipt can be related to many(!) promo-actions or discounts. This means that several actions can be applied to the same receipt position: for example, one of them was a season discount, the second a loyalty action from CRM, the third a bonus-buy action, and so on. This data is loaded into BW from the POS system (the concrete method is not important here; it may be POSDM, an ABAP proxy or something else).

So, we have receipt-position data in which a single position (with an amount of 100, say) can be related to several actions such as P001, P002 and P003.


Users want to filter the sales data in the reports by any action (one or many). And of course the report has to show the full amount of the receipt position, regardless of the filter by promo. For example, if the user filters the report by action P001, the amount in the report should be 100, not 33.33.

So, the question is: how should we organize such data in BW, and which data structure should we build?

Possible solutions


There are several different variants:

Variant A. In one InfoProvider (InfoCube or DSO) we store the receipt positions with quantity, amount and so on; in a second InfoProvider we store the combination [Receipt Number, Position Number, Promo ID]. Then we combine them with an InfoSet and build a BEx query on it. This is the standard BW method and the simplest data structure, but it has one problem: it does not work on large data volumes.

If we have BW on HANA, we can create two ADSOs and a Composite Provider that realizes the join between them. If we build this structure correctly, the join is executed on HANA level and it is very fast.

But if we work with BW on a classic DB, using InfoSets leads to a join between two very large tables (because both of them contain receipt positions), and this join works extremely slowly. Each table can hold tens of millions of records or even more, so a BEx query in such a case runs for hours or does not finish at all.

Variant B. The second variant is to use one InfoCube and, during data load, to multiply every receipt-position line as many times as the number of actions for this line. In my example, the receipt position would be stored three times, once for each of the actions P001, P002 and P003, each time with the full amount.


In this variant we need to use special aggregation for the receipt key figures (such as quantity or amount); otherwise they are multiplied in the reports (300 instead of 100).

This is a rather bad variant too, because using special aggregation over the dimensions “receipt” and “receipt position” does not allow us to use aggregates. In that case aggregated reports read the original InfoCube instead of its aggregates, the special aggregation is calculated on the application server, and report performance is very poor. Moreover, in this variant the number of lines in the already very large InfoCube is multiplied, which increases load time and disk space.

Variant C. The variant which I finally chose. It is based on the following idea: we add an additional service dimension which stores the combination of promo IDs for every receipt position. The value of this characteristic is the concatenation of the promo IDs separated by a special symbol, for example “&”. It is filled during data load and is used for filtering the data by any combination of promo-actions.

So, in this variant we should make the following steps:

1. Make an organizational restriction on the maximum number of promo-actions allowed for a receipt position, for example five.

2. Make a new set of characteristics ZPROMO1, …, ZPROMO5 of type CHAR10, like the 0RT_PROMO characteristic.

3. Make an additional characteristic ZPROMOALL type CHAR60.

4. In the end-routine of the transformation of the POS data (to the InfoCube or to the DSO) we add the code which fills these characteristics, as sketched below.
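
A minimal sketch of this end-routine logic is shown below. It assumes the promo IDs of the current receipt position are already available in an internal table; the helper get_promos_for_position and all field names are illustrative assumptions, not the exact ones from the project:

    loop at RESULT_PACKAGE assigning field-symbol(<res>).
      " lt_promos is assumed to hold the (max. 5) promo IDs of this receipt
      " position, e.g. P001 / P002 / P003; get_promos_for_position is a
      " hypothetical helper that reads them from the source data.
      data(lt_promos) = get_promos_for_position( iv_receipt  = <res>-/bic/zreceipt
                                                 iv_position = <res>-/bic/zposition ).
      data lv_all type c length 60.
      clear lv_all.
      loop at lt_promos into data(lv_promo).
        case sy-tabix.
          when 1. <res>-/bic/zpromo1 = lv_promo.
          when 2. <res>-/bic/zpromo2 = lv_promo.
          when 3. <res>-/bic/zpromo3 = lv_promo.
          when 4. <res>-/bic/zpromo4 = lv_promo.
          when 5. <res>-/bic/zpromo5 = lv_promo.
        endcase.
        " build the concatenated key, e.g. P001&P002&P003
        if lv_all is initial.
          lv_all = lv_promo.
        else.
          concatenate lv_all lv_promo into lv_all separated by '&'.
        endif.
      endloop.
      <res>-/bic/zpromoall = lv_all.
    endloop.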

As a result we get a cube which, besides the receipt data, contains the characteristics ZPROMO1, …, ZPROMO5 and the concatenated characteristic ZPROMOALL.


On the BEX query level:

1. On the ZPROMO1 dimension we create an input-ready variable ZPROMO_V1 (several single values).

2. We create a dummy hidden key figure and restrict the ZPROMO1 dimension inside it by this variable.

3. We restrict the ZPROMOALL dimension by the exit variable ZVAR_PROMOALL (the full ABAP code for it is given below). In this exit we find all values of the ZPROMOALL dimension which contain at least one of the promo IDs selected by the user in the ZPROMO_V1 variable. This set of ZPROMOALL values is then used for filtering.

As a result, the user can set a filter on any number of promo-actions, and the query result contains all receipt positions which are related to one or more actions from the user’s filter. Then, if necessary, the user can add the separate dimensions “Promo1”, …, “Promo5” to the report. This approach does not require any special aggregation, allows us to use aggregates and therefore provides good report performance.

So, the best variant is to use BW on HANA and forget about this join problem :) But there are still BW implementations on classic DBs, so this information may be useful. I used the last variant (variant C) on two such projects and it really worked.

ABAP code for the user exit:

* BEx variable customer exit (enhancement RSR00001). loc_var_range and
* l_s_range are assumed to be declared in the exit include, for example:
*   DATA: loc_var_range TYPE rrrangeexit,
*         l_s_range     TYPE rsr_s_rangesid.
    if i_step = '2'.
      Types: begin of ty_promo_all,
              code type /bic/oizpromoall,
             end of ty_promo_all.
      Data:
        t_promo_all type sorted table of ty_promo_all with UNIQUE key code,
        s_promo_all type ty_promo_all,
        s_str type string,
        l_promo_all type /bic/oizpromoall.

      clear t_promo_all.
      " read the promo IDs the user entered for variable ZPROMO_V1
      loop at i_t_var_range into loc_var_range
        where vnam = 'ZPROMO_V1'.
        if loc_var_range-low is not initial.
          concatenate '%' loc_var_range-low '%' into s_str.
          " find all concatenated ZPROMOALL values containing this promo ID
          select /BIC/ZPROMOALL into l_promo_all from /BIC/SZPROMOALL
            where /BIC/ZPROMOALL like s_str.
            clear s_promo_all.
            s_promo_all-code = l_promo_all.
            collect s_promo_all into t_promo_all.
          endselect.
        endif.
      endloop.

      " return the matching ZPROMOALL values as 'EQ' selections in e_t_range
      loop at t_promo_all into s_promo_all.
        clear l_s_range.
        l_s_range-sign = 'I'.
        l_s_range-opt  = 'EQ'.
        l_s_range-low = s_promo_all-code.
        append l_s_range to e_t_range.
      endloop.
    endif.

Tuesday 14 November 2017

Routines in SAP BW 7.x

There are different types of routines in SAP BW in which ABAP can be written for data modification or population at different levels. Routines help us to implement business rules or set up a better data model where the standard options are not sufficient.

This document will cover the routines below in SAP BW 7.x.
(It covers “when to write” examples; “how to write” is left for you to explore.)

◉ Transfer Routine
◉ Start Routine
◉ End Routine
◉ Expert Routine
◉ Field Routine

The transfer routine is available at the InfoObject level, whereas the start, end, field and expert routines are available at the transformation level.

In a transformation, the execution sequence is either Start Routine -> Field Routine -> End Routine, or only the Expert Routine. The options for the start, end and field routines are disabled once a transformation with an expert routine is created.

◉ Transfer Routine: available for characteristic/currency/unit InfoObjects.


This routine is defined at the InfoObject level and executes only when data from the DataSource is transferred to the immediate InfoProvider.

Purpose: data transformation at the InfoObject level. The data is refined as soon as it enters the BW InfoProvider (data model).

Example: suppose one BW system is connected to two different ECC systems. One ECC stores the unit for litre as “L” and the other as “LT”. If BW wants to maintain the single unit “L” for litre, a transfer routine ensures that “LT” is converted to “L” using ABAP, as sketched below.
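
A minimal sketch of the core logic of such a transfer routine (the routine's fixed interface is generated by BW; RESULT is assumed to be the value being transferred, as in the generated template):

* Harmonize the unit of measure inside the InfoObject transfer routine.
* RESULT holds the incoming unit value (per the generated routine template).
  IF result = 'LT'.
    result = 'L'.
  ENDIF.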


◉ Start Routine: Can be written in Transformation.


The start routine executes first when a DTP is executed and is created directly in the transformation. It processes a whole package of data at a time (the DTP package size).


Purpose: modify the source data (SOURCE_PACKAGE), i.e. the data from the source DataSource, DSO or Cube, before it reaches the target.

Example:

1. Deletion of records during the data load (similar to a DTP filter, but hard-coded); see the sketch after this list.
2. Populating a global internal table which can be used later.
3. Sorting the source data before sending it to the target.
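
A minimal sketch of example 1 inside the start routine (the field name /BIC/ZSTATUS and the value 'X' are assumptions):

* Start routine: drop records that should not reach the target.
* The field /BIC/ZSTATUS and the filter value 'X' are only illustrative.
    DELETE SOURCE_PACKAGE WHERE /bic/zstatus = 'X'.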

◉ End Routine: Can be written in Transformation.


The end routine executes last when a DTP is executed and is also created in the transformation. It processes a whole package of data at a time (the DTP package size).


Purpose: modify the target data (RESULT_PACKAGE), i.e. the data for the target DSO or Cube; once the data has been processed in the transformation and is ready to be stored in the InfoProvider, it can still be modified here.

Example:

1. Population/Modification of target fields.
2. Populating Internal Table which can be used again in End Routine.

If multiple target fields need to be populated or modified, the end routine should be used, because all the fields can be populated or modified in one go for the data package, instead of a field routine which works on one field at a time. A minimal sketch follows.
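
The sketch below fills one target field from a global lookup table built in the start routine (gt_lookup and the field names customer, region and /BIC/ZREGION are assumptions):

* End routine: fill /BIC/ZREGION in every target record from a lookup table.
* gt_lookup is assumed to be a global table filled in the start routine.
    LOOP AT RESULT_PACKAGE ASSIGNING FIELD-SYMBOL(<result>).
      READ TABLE gt_lookup INTO DATA(ls_lookup)
           WITH KEY customer = <result>-customer.
      IF sy-subrc = 0.
        <result>-/bic/zregion = ls_lookup-region.
      ENDIF.
    ENDLOOP.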

◉ Expert Routine: Can be written in Transformation.


The expert routine is an alternative to the start, field and end routines and replaces them in the transformation.


When creating it, you must confirm the change; afterwards only the expert routine is available in the transformation.


Purpose: to handle complex requirements that the standard approach with start and end routines cannot satisfy. In this routine, SOURCE_PACKAGE and RESULT_PACKAGE can both be used at the same time for data modification.

Example:

Transposing data, i.e. the source has 12 records (one per month) for a year and the target needs only a single record for that year with all the months as columns.

◉ Field Routine: Can be written in Transformation.


A field routine is written for a single field of the transformation where a complex modification or derivation is required.


Purpose: to modify or derive an individual field value.

Example:

1. Modification/population of a single field where an internal table is populated in the start routine and read in the field routine.

Note: a field routine should be used only if not all the source fields are mapped 1:1 to the target; otherwise an end routine is the better option, as all the required fields are mapped and available in the target for bulk processing.

2. Concatenating two or more fields into a single target field (see the sketch after this list).
3. String modifications on the source fields, storing the result in the target field.
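
A minimal sketch of example 2 inside a field routine (the SOURCE_FIELDS component names doc_number and item_number are assumptions):

* Field routine: build the target value from two source fields.
* doc_number and item_number are illustrative components of SOURCE_FIELDS.
    CONCATENATE source_fields-doc_number source_fields-item_number
      INTO result SEPARATED BY '/'.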

The note above, about choosing between the end routine and a field routine, is explained with an example:

Consider an example where a future joiner’s period, quarter and year need to be calculated using the Future Joiner field value (TRUE or FALSE).


As mentioned, if the fields used in the logic/calculation are available in the target, the end routine is the best option, because a single IF suffices to calculate all the fields falling under the same condition. In this scenario the end routine improves the performance of the transformation.

If Future Joiner and Start Date are not available in Target then Field Routine should be used.


Monday 13 November 2017

PPDS on S/4HANA

Introduction to PPDS (Production Planning and Detailed Scheduling)

Production Planning and Detailed Scheduling (PPDS) on SAP S/4HANA is the product running on SAP S/4HANA. Production planning and detailed scheduling is a tool from APO (Advanced Planning and Optimization) which takes care of planning and scheduling while considering the capacities of the various resources.


You can use SAP PPDS on SAP S/4HANA to:

1. Create planned orders for in-house production for all the requirements.

2. Optimize the result so that the resources are well utilized.

3. Plan with various variants and still produce optimized results.

SAP PPDS for S/4HANA 


SAP PPDS for S/4HANA allows the user to use PPDS in a simplified way, without having to worry much about the CIF (Core Interface).

Let’s have a look at the advanced planning features in the material master for S/4HANA with PPDS.

In transaction MM01 there is a new view, “Advanced Planning”, which indicates whether the material should immediately be considered for planning in PPDS.

It includes data such as the planning procedure, horizon, plan explosion, etc.

In a similar way, the work center also has a checkbox for ‘Advanced Planning’.

S/4HANA has simplified the CIF of master data. PPDS for S/4HANA still uses only one system, and the CIF transfer of data to PPDS is much easier.

The user has the liberty to create the product completely in ERP and then CIF it to PPDS; by ‘completely’ I mean with all the details like planning procedure, plan explosion, lot size, etc.

I use the term PPDS interchangeably with APO as for S/4HANA , PPDS means APO.

Pre-requisite 

Set Advanced Planning and Scheduling to active.

This is done in SPRO.

Master data changes 

As mentioned, the material master in the ERP system has an additional view, ‘Advanced Planning’.


The ‘Advanced Planning’ view of the material has an ‘Advanced Planning’ checkbox and PPDS details like plan explosion, planning procedure, demand details, lot size details, GR/GI processing times, and shelf life times.

All these details can be taken from the ERP system and used for planning in PPDS.

The ‘Advanced Planning’ flag indicates that the material should be transferred to the APO system for PPDS planning, and this is handled by the system itself: no additional effort is needed to CIF the data, and there is no need to create an integration model for the material.

The screen below shows the Advanced Planning view in detail.


Along with the material, work centers are another set of master data which the PPDS would be using.

Work centers are the actual resources using which the production takes place in the shop floor.

There is a check box for ‘Advanced Planning’ in the Basic data tab of the work center in transaction CR01.

The flag for this check box indicates that the work center is immediately available for planning in SAP PPDS. Again, integration model need not be created for work centers.

The below screenshot shows the ‘Advanced Planning’ checkbox for a work center.


So, overall when you have your plants ready in PPDS system, you can create materials and work centers which would immediately be available in PPDS for planning.

CIF – PDS


The production data structure (PDS) is the blueprint of the material in APO, with all the details like the BOM (Bill of Material) and routing/recipe. The PDS is generated in the APO system using the production version of the material; the production version in turn holds the details of the BOM and routing/recipe.

Now the production data structure needs to be integrated into the APO system. This is not done in the traditional way, i.e. using transaction CFM1; instead, we use transaction /CURTOADV_CREATE.

The screenshot below shows /CURTOADV_CREATE screen.


Overall, the introduction of PPDS on S/4HANA allows the user to modify and rebuild the master data structure and to integrate it with the PPDS system easily, with very minor changes.