
Friday, September 30, 2016

Code-to-Data approach with ABAP 7.4

Some benefits of new Open SQL:
  • Transparent Optimizations:
    • Fast data access
    • Optimizations of SELECT ... INTO ITAB and of SELECT SINGLE
  • Database-oriented programming is better supported by the following:
    • Extended Open SQL, supporting more of the SQL-92 standard
    • Advanced view definition capabilities
  • SAP HANA specific features:
    • Database procedures
    • Column views
New Open SQL

One of the reasons for using Native SQL was that the SQL-92 standard defines features that were not previously available in Open SQL. For example, Open SQL lacked expressions and had limited join types.

As of SAP NetWeaver 7.40 SPS5, the scenarios in which Native SQL is necessary are reduced, because Open SQL has been extended.

New Open SQL syntax:

SELECT CARRID, CONNID
 FROM SBOOK
    INTO TABLE @LT_BOOKINGS
 WHERE CUSTOMID = @LV_CUSTOMER.

SELECT CARRID, CONNID, FLDATE, BOOKID, CUSTOMID,
    CASE SMOKER
      WHEN 'X' THEN
        FLOOR( LOCCURAM * @LC_SMOKER_FEE )
      ELSE CEIL( LOCCURAM * @LC_NONSM_DISC )
    END AS ADJUSTED_AMOUNT,
      CAST( LOCCURAM AS FLTP ) /
      CAST( 2 AS FLTP ) AS HALF,
  LOCCURKEY
 FROM SBOOK
    INTO TABLE @LT_BOOKINGS
 WHERE CUSTOMID = @LV_CUSTOMER.
  • ABAP variables/constants escaped with "@"
  • Comma-separated column list
  • Arithmetic expressions using +, -, *,  /, DIV, MOD, ABS, FLOOR, CEIL, CAST
  • Common semantics
  • String concatenation using && operator
  • SQL CASE, COALESCE
  • Right Outer Join now supported
  • Support for UNION and UNION ALL in Open SQL. The result of UNION ALL can contain duplicates, whereas UNION returns only distinct rows (see the sketch below).
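
A minimal sketch of the extended syntax, assuming the standard flight-model tables SCARR and SPFLI and a sufficiently recent release (some features, such as && and UNION, only became available after 7.40 SPS5):

DATA LV_CARRID TYPE SCARR-CARRID VALUE 'LH'.

" Host variables escaped with @, comma-separated field list,
" string concatenation with && directly in the SELECT list
SELECT CARRID && ' - ' && CARRNAME AS CARRIER_TEXT
  FROM SCARR
  WHERE CARRID = @LV_CARRID
  INTO TABLE @DATA(LT_NAMES).

" UNION returns only distinct rows; UNION ALL would keep duplicates
SELECT CITYFROM AS CITY FROM SPFLI
UNION
SELECT CITYTO AS CITY FROM SPFLI
  INTO TABLE @DATA(LT_CITIES).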


Wednesday, September 14, 2016

Core Data Services in ABAP

Core Data Services (CDS) in ABAP allow you to define a semantically rich data model in DDL source objects. They are integrated into the ABAP infrastructure and make it possible to push more calculations down into the database.

CDS consists of the following three sub-languages, addressing different sub-domains of data modelling:

  • Data Definition Language (DDL): the DDL part of CDS allows you to define semantically rich database tables and views (CDS entities) as well as user-defined types in the database.
  • Query Language (QL): views defined using the CDS DDL can be consumed in ABAP programs using Open SQL.
  • Data Control Language (DCL): DCL is used to define authorizations for CDS entities. 

CDS is not only an integral part of SAP HANA but is also integrated into the ABAP dictionary and language. The new repository object types, DDL sources and DCL sources, for defining enhanced view entities in ABAP allow you to push more data-intensive calculations into the database than is possible with classic ABAP dictionary views.

You can use CDS views to read data and to perform calculations while reading it. CDS views currently do not support data modification.


To create the CDS view, do the following:
  • Use the ABAP Development Tools (ADT) in Eclipse. The classic ABAP Workbench does not support the new type of ABAP repository object called DDL sources.
  • In the ABAP project tree, select the package that is to contain the CDS view and right-click to choose the menu entry New --> Other ABAP Repository Object --> Dictionary --> DDL Source.
  • Use the DDL statement DEFINE VIEW and SQL-like syntax to define the view, for example as sketched below.
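
A minimal DDL source sketch; the names ZSBOOKCNT and Z_Booking_Counts are purely illustrative, and the available annotations vary by release:

@AbapCatalog.sqlViewName: 'ZSBOOKCNT'
define view Z_Booking_Counts
  as select from sbook
{
  key carrid,
  key connid,
      count(*) as number_of_bookings
}
group by carrid, connid

The activated view can then be consumed from ABAP with Open SQL, e.g. SELECT * FROM z_booking_counts INTO TABLE @DATA(lt_counts).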

Tuesday, September 6, 2016

Migrating to SAP HANA

Considerations Before Migrating to SAP HANA

  • If ABAP code makes use of native SQL (for example, via ADBC or EXEC SQL), this code must be checked and adapted to the SQL dialect of SAP HANA, as in the hypothetical sketch below.
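
A hypothetical example of static Native SQL that would need review: the Oracle-style optimizer hint (and any other database-specific syntax) is not understood by SAP HANA and has to be removed or rewritten.

DATA LV_COUNT TYPE I.

" Native SQL bypasses automatic client handling, so the client is
" specified explicitly; the /*+ FULL(spfli) */ hint is Oracle-specific.
EXEC SQL.
  SELECT /*+ FULL(spfli) */ COUNT(*)
    INTO :LV_COUNT
    FROM SPFLI
    WHERE MANDT = :SY-MANDT
ENDEXEC.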
Performance Considerations
  • In many situations, existing code might run faster after migrating to SAP HANA. However, there can be situations in which existing code is negatively impacted. A first step in detecting code that could lead to performance issues after migrating to SAP HANA is to run a static code analysis.

Friday, August 5, 2016

Build First SAP HANA Model in 10 minutes

In the article SAP HANA Modeling Introduction we explained the basics of SAP HANA data modeling.
This article describes how to build a simple model using data stored on SAP HANA. 
By the end of this, you will have created tables, attribute views, and analytical views in SAP HANA.

Prerequisite:
You have SAP HANA Studio installed on your machine.

Add HANA System in HANA Studio

In the article Download and Install HANA Studio we explained how to download and install HANA Studio.
In this article we explain how to add a HANA system to HANA Studio.

In order to connect to an SAP HANA system we need to know the server host and the instance number, as well as a username and password for the instance. The Navigator pane on the left shows all the HANA systems added to SAP HANA Studio.

Steps to add new HANA system:

  • Right click in the Navigator space and click on Add System 



  • Enter the HANA system details, i.e. the hostname and the HANA database instance number, and click Next. (Port 3xx15, where xx is the instance number, must be open for the JDBC connection.)



  • Enter the database username & password to connect to the SAP HANA database. Click on Next and then Finish.
  • The SAP HANA system now appears in the Navigator.


  • Create new tables in SAP HANA and fill them with data:


    1. Open HANA Studio and expand the SAP HANA system.
    Go to your schema. Right-click on your schema and select SQL editor.
    SAP HANA Studio
    Note: In this example the schema name is "SAP_HANA_TUTORIAL". If you want to create a new schema, use the query below.
    create schema "schema_name";


  • Copy and paste the script below into the SQL editor and execute it. 

  • --REPLACE <YOUR SCHEMA> WITH YOUR SCHEMA NAME

    -- Create Product table
    create column table "<YOUR SCHEMA>"."PRODUCT"(
          "PRODUCT_ID" INTEGER null,
          "PRODUCT_NAME" VARCHAR (100) null default ''
    );
    insert into "<YOUR SCHEMA>"."PRODUCT" values(1,'Shirts');
    insert into "<YOUR SCHEMA>"."PRODUCT" values(2,'Jackets');
    insert into "<YOUR SCHEMA>"."PRODUCT" values(3,'Trousers');
    insert into "<YOUR SCHEMA>"."PRODUCT" values(4,'Coats');
    insert into "<YOUR SCHEMA>"."PRODUCT" values(5,'Purse');
    -- Create Region table
    create column table "<YOUR SCHEMA>"."REGION"(
          "REGION_ID" INTEGER null,
          "REGION_NAME" VARCHAR (100) null default '',
          "SUB_REGION_NAME" VARCHAR (100) null default ''
    );

    insert into "<YOUR SCHEMA>"."REGION" values(1,'Americas','North-America');
    insert into "<YOUR SCHEMA>"."REGION" values(2,'Americas','South-America');
    insert into "<YOUR SCHEMA>"."REGION" values(3,'Asia','India');
    insert into "<YOUR SCHEMA>"."REGION" values(4,'Asia','Japan');
    insert into "<YOUR SCHEMA>"."REGION" values(5,'Europe','Germany');

    -- Create Sales table
    create column table "<YOUR SCHEMA>"."SALES"(
          "REGION_ID" INTEGER null,
          "PRODUCT_ID" INTEGER null,
          "SALES_AMOUNT" DOUBLE null);

    insert into "<YOUR SCHEMA>"."SALES" values(1,1,100);
    insert into "<YOUR SCHEMA>"."SALES" values(1,2,90);
    insert into "<YOUR SCHEMA>"."SALES" values(1,5,85);
    insert into "<YOUR SCHEMA>"."SALES" values(2,2,80);
    insert into "<YOUR SCHEMA>"."SALES" values(2,1,75);
    insert into "<YOUR SCHEMA>"."SALES" values(3,3,85);
    insert into "<YOUR SCHEMA>"."SALES" values(4,4,75);
    insert into "<YOUR SCHEMA>"."SALES" values(5,1,65);
    insert into "<YOUR SCHEMA>"."SALES" values(5,2,65); 


  • After executing the script you should have 3 tables created. If you don't see them, try right-clicking on your schema and refreshing, or run the quick check below.
    SAP HANA Studio
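
    As an additional check, you can count the rows directly in the SQL editor (adjust the schema name):

    -- quick verification that the three tables exist and contain data
    SELECT COUNT(*) FROM "<YOUR SCHEMA>"."PRODUCT";
    SELECT COUNT(*) FROM "<YOUR SCHEMA>"."REGION";
    SELECT COUNT(*) FROM "<YOUR SCHEMA>"."SALES";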



  • Grant schema SELECT rights to _SYS_REPO user:

    Open the SQL editor of your schema and execute the following statement:
    GRANT SELECT ON SCHEMA <YOUR SCHEMA> TO _SYS_REPO WITH GRANT OPTION;
    If you miss this step, an error will occur when you activate your views later.


    Create an attribute view:


    1. Open HANA Studio and make sure you are in the Modeler perspective.

      SAP HANA Studio
    2. Create a new package under the Content folder.
      Right-click on the Content folder and choose "New" -> "Package". Fill in the "Name" and "Description" fields and click "OK".
      If you don't see the new package afterwards, try right-clicking on the Content folder and refreshing.
    3. Right-click on the package and choose "New Attribute View". Enter a name and a description and click "Finish".

      SAP HANA Attribute View
    4. The attribute view opens in the HANA Studio editor. Click the "+" sign on "Data Foundation".

      SAP HANA Attribute View
    5. Search for the table "REGION" and select it.

      SAP HANA Attribute View
    6. Now add the columns of the REGION table to the output: right-click on a column and select "Add to Output". Add all 3 columns REGION_ID, REGION_NAME and SUB_REGION_NAME to the output.
      Once this is done, you will see the selected columns in the right-hand pane.

      SAP HANA Attribute View
    7. Now select "Semantics". All 3 columns appear under the Column pane as attributes.
      SAP HANA Attribute View
    8. Next, define the attributes and key attributes. Every attribute view must have at least one key attribute.
    9. Click on "Type" to make REGION_ID a key attribute.

      SAP HANA Studio
    10. Click the "Save and Activate" button in the top right corner to activate the view.

      SAP HANA Studio
    11. The "Job Log" pane shows an activation completed message.

      SAP HANA Studio
    12. The attribute view is now created and activated.
      To see the output of this view, click the "Data Preview" button in the top right corner.
      SAP HANA Studio

      Then select the "Raw Data" tab.
      SAP HANA Studio

    Congratulations! You have successfully created your first modeling view.

    The next step is to create an analytic view.


    Create an Analytic View:


    1. Right click on the package and choose "New Analytic View." Enter a name and a description and click "Finish"

      SAP HANA Analytic View 
    2. Click on "+" sign of "Data Foundation" and add table SALES.

      SAP HANA Analytic View 
    3. Right Click on the columns of table SALES and add REGION_ID and SALES_AMOUNT to output.

      SAP HANA Analytic View 
    4. Click on "+" sign of "Logical Join" and add attribute view "AT_Region" which was created earlier.

      SAP HANA Analytic View 
    5. Click on REGION_ID in the "Data Foundation" and connect it to REGION_ID of the attribute view AT_Region. In the Properties pane, set the join type to "Left Outer" and the cardinality to n..1.

      SAP HANA Analytic View 
    6. Select "Sementics". In the right side change the column type of SALES_AMOUNT as measure.

      SAP HANA Analytic View 
    7. Activate the analytic view in the same way as the attribute view.
      Then right-click on the analytic view and choose "Data Preview". You can browse through the tabs Raw Data, Distinct Values, and Analysis.

      SAP HANA Analytic View

      SAP HANA Analytic View 

    Congratulations! You have successfully created your first analytic view. 


    Reference at http://saphanatutorial.com/build-your-first-sap-hana-model/

    SAP HANA Architecture

    The SAP HANA database is a main-memory-centric data management platform. It runs on SUSE Linux Enterprise Server and is built in C++.
    The SAP HANA database can be distributed across multiple machines.
    The main advantages of SAP HANA are:
    • It is very fast, because all data is held in memory and does not have to be loaded from disk.
    • It can be used for both OLAP (online analytical processing) and OLTP (online transaction processing) on a single database.

    The SAP HANA database consists of a set of in-memory processing engines. The calculation engine is the main in-memory processing engine; it works together with other processing engines such as the relational database engine (row and column store engines), the OLAP engine, etc.
    Relational database tables reside in either the column store or the row store.
    There are two storage types for SAP HANA tables:
    1. Row storage (for row tables).
    2. Column storage (for column tables).
    Text data and graph data reside in the text engine and the graph engine respectively, and there are further engines in the SAP HANA database. Data can be kept in these engines as long as enough memory is available.

    SAP HANA Architecture

    Data in the SAP HANA column store is compressed using different compression techniques (e.g. dictionary encoding, run-length encoding, sparse encoding, cluster encoding, indirect encoding).
    When the main memory limit is reached, database objects (tables, views, etc.) that are not in use are unloaded from main memory and saved to disk.
    Which objects are unloaded is determined by application semantics, and they are reloaded into main memory from disk when they are needed again. Under normal circumstances the SAP HANA database manages unloading and loading of data automatically.
    However, a user can load and unload individual tables manually by selecting the table under its schema in SAP HANA Studio, right-clicking and choosing "Unload/Load", or with SQL as sketched below.
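
    For example, the standard monitoring view M_CS_TABLES can be queried to check the load status and memory footprint of column-store tables (adjust the schema name to your own):

    -- load status and memory usage of column-store tables in one schema
    SELECT TABLE_NAME, LOADED, MEMORY_SIZE_IN_TOTAL, RECORD_COUNT
      FROM M_CS_TABLES
      WHERE SCHEMA_NAME = '<YOUR SCHEMA>';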
    The SAP HANA server consists of:
    1. Index Server
    2. Preprocessor Server
    3. Name Server
    4. Statistics Server
    5. XS Engine

    SAP HANA Architecture
    1. SAP HANA Index Server
    • The index server is the main SAP HANA database component.
    • It contains the actual data stores and the engines for processing the data.
    • The index server processes incoming SQL or MDX statements in the context of authenticated sessions and transactions.
    Below is the architecture of the index server.
    SAP HANA Index Server overview
    1. Session and Transaction Manager: the session component manages sessions and connections for the SAP HANA database, while the transaction manager coordinates and controls transactions.
    2. SQL and MDX Processor: the SQL processor parses incoming queries and routes them to the query processing engines (SQL / SQLScript / R / calc engine). The MDX processor queries and manipulates multidimensional data (e.g. analytic views in SAP HANA).
    3. SQL / SQLScript / R / Calc Engine: this component executes SQL and SQLScript statements and converts calculations into calculation models.
    4. Repository: the repository maintains the versioning of SAP HANA metadata objects (e.g. attribute views, analytic views, stored procedures).
    5. Persistence Layer: this layer provides the built-in durability and recovery capability of the SAP HANA database; data is written to disk at savepoints in the data volume.
    2. Preprocessor Server
    This server is used for text analysis and extracts information from text when the search function is used.
    3. Name Server
    This server holds information about the system landscape. In a distributed system, the name server knows which components are running on which host and where the data is located.
    4. Statistics Server
    The statistics server collects data about the status, resource allocation/consumption and performance of the SAP HANA system.
    5. XS Server
    The XS server contains the XS Engine. It allows external applications and developers to access the SAP HANA database via the XS Engine; external client applications can use HTTP to communicate with the XS Engine's built-in HTTP server.

    SAP HANA Landscape

    "HANA" stands for High-Performance Analytic Appliance, a combination of a hardware and a software platform.
    • Due to changes in computer architecture, much more powerful servers are available in terms of CPU, RAM, and hard disk.
    • SAP HANA addresses the classic performance bottleneck by keeping all data in main memory, so data does not need to be transferred frequently from disk into main memory.
    SAP HANA builds on innovations in both hardware and software.
    There are two types of Relational data stores in SAP HANA: Row Store and Column Store.
    Row Store
    • It is similar to a traditional database (e.g. Oracle, SQL Server). The main difference is that all data is kept in the row storage area in SAP HANA's memory, unlike a traditional database where the data resides on the hard drive.
    Column Store
    • The column store is the part of the SAP HANA database that manages data column-wise in SAP HANA memory. Column tables are stored in the column store area, which is optimized for read operations while still providing good write performance. The storage type is chosen when a table is created, as sketched below.
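
    A minimal example of both storage types (table and schema names are purely illustrative):

    -- row-store table
    CREATE ROW TABLE "<YOUR SCHEMA>"."APP_CONFIG"(
          "PARAM_NAME"  VARCHAR (100),
          "PARAM_VALUE" VARCHAR (100)
    );
    -- column-store table (the usual choice for analytical data)
    CREATE COLUMN TABLE "<YOUR SCHEMA>"."SALES_HISTORY"(
          "REGION_ID"    INTEGER,
          "SALES_AMOUNT" DOUBLE
    );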
    Read and write performance is optimized by the two data structures described below.
    Main Storage
    Main storage contains the main part of the data. In main storage, suitable compression methods (dictionary encoding, cluster encoding, sparse encoding, run-length encoding, etc.) are applied in order to save memory and speed up searches.
    • Write operations on compressed data in main storage would be costly, so write operations do not modify main storage directly. Instead, all changes are written to a separate area of the column store known as "delta storage".
    • Delta storage is optimized for write operations and uses only basic compression. Write operations are allowed only on delta storage, whereas read operations are allowed on both storages.
    We can manually load data into main memory with the "Load into Memory" option and unload it from main memory with the "Unload from Memory" option in SAP HANA Studio, or with the SQL statements sketched below.
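
    The same can be done with SQL (using the SALES table from the earlier example):

    -- manually load a column table fully into memory
    LOAD "<YOUR SCHEMA>"."SALES" ALL;
    -- manually unload it from memory again
    UNLOAD "<YOUR SCHEMA>"."SALES";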
    Delta Storage
    Delta storage is used for write operations and uses basic compression. All uncommitted modifications to column table data are stored in delta storage.
    To move these changes into main storage, a "delta merge" operation is used, either from SAP HANA Studio or with SQL as sketched below.
    • The purpose of the delta merge operation is to move the changes collected in delta storage into main storage.
    • After a delta merge on a column table, the content of main storage is saved to disk and the compression is recalculated.
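
    A delta merge can also be requested explicitly with SQL (table name from the earlier example):

    -- move committed changes from delta storage into main storage
    MERGE DELTA OF "<YOUR SCHEMA>"."SALES";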
    The process of moving data from delta to main storage during a delta merge is as follows.
    There is a buffer store (L1-delta) that is organized as row storage, so for incoming writes a column table initially behaves like a row store.
    1. The user runs an update/insert statement on the table.
    2. The data first goes into the L1-delta (uncommitted data).
    3. The data then moves to the L2-delta buffer, which is column-oriented (committed data).
    4. When the L2-delta processing is complete, the data moves to main storage.
    Column storage is therefore write-optimized (via the L1-delta) as well as read-optimized (via main storage). The L1-delta holds the uncommitted data; committed data moves into main storage through the L2-delta, and from main storage the data is written to the persistence layer.
    Table data is laid out linearly in memory and on disk. In the row store, the values of each row are stored contiguously, row after row; in the column store, the values of each column are stored contiguously. Because a column contains many similar values, the column-wise layout compresses very well, so the column store has a clear memory-saving advantage.

    SAP HANA Sizing

    Sizing determines the hardware requirements of an SAP HANA system, such as RAM, hard disk and CPU.
    The most important sizing component is memory, followed by CPU. The third component is disk, whose sizing is largely derived from the memory and CPU sizing.
    In an SAP HANA implementation, one of the critical tasks is to determine the right server size for the business requirements.
    SAP HANA sizing differs from that of a conventional DBMS in terms of:
    • Main memory requirement (memory sizing is determined by the metadata and transaction data held in SAP HANA)
    • CPU requirement (the CPU forecast is an estimate, not an exact figure)
    • Disk space requirement (calculated for data persistence and for log data)
    The application server CPU and application server memory requirements remain unchanged.
    SAP provides various guidelines and methods to calculate the correct size.
    The following methods can be used:
    1. Sizing using an ABAP report.
    2. Sizing using DB scripts.
    3. Sizing using the Quick Sizer tool.
    The Quick Sizer tool presents the calculated requirements in a report.

    Thursday, August 4, 2016

    Key technologies in SAP HANA

    What is in-memory database?

    An in-memory database keeps all data in memory (RAM). No time is wasted loading data from the hard disk into RAM, or keeping some data in RAM and some temporarily on disk during processing. Everything is in memory all the time, which gives the CPUs fast access to the data.
    The speed advantage of this RAM-based storage is further increased by multi-core CPUs, multiple CPUs per board, and multiple boards per server appliance.

    Key hardware technology Innovations in SAP HANA
    Multi-core architecture (e.g. 8 CPUs x 10 cores per blade), with massively parallel scaling across many blades.
    Much larger main memory (2 TB and more per server) at dramatically declining prices.

    Key software technology innovations in SAP HANA

    To understand the impact SAP HANA has on the ABAPer, it can be helpful to revisit some of the key innovations introduced with the in-memory database technology.
    The first important innovation is (I am sure that you have heard about it) that SAP HANA is capable of storing all relevant business data in main memory. That does not mean that SAP HANA does not support storing data on disk. It does mean that SAP HANA (typically) does not need to access the disk during query execution.


    SAP HANA supports multi-core architectures by distributing queries across multiple CPU cores (and across multiple server nodes). Applications running on SAP HANA hence can benefit from massive parallelization.

    Within SAP HANA, data can be organized either in the row store (as in 'traditional' databases) or in the column store. While both stores have specific advantages and disadvantages, the bulk of business data resides in the column store, where it can be quickly searched and aggregated.

    SAP HANA can compress business data in the column store. This not only reduces the amount of main memory needed, but also the amount of data to be transferred between main memory and CPU. The latter is important because the bottleneck of in-memory database technology is the CPU waiting for data to be loaded from main memory into the cache (in contrast to disk I/O, which used to be the bottleneck in the past).

    Last but not least, SAP HANA is capable of partitioning datasets. Partitioning is of particular interest for large database tables and supports the parallelization of queries.
    For example, if a database table is spread across multiple partitions (on one server node), the aggregation of one column can be done in two steps: first, all partitions are aggregated simultaneously; then the partial results are added up. This uses parallelization to avoid CPU idle time.
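
    A sketch of what this looks like in SQL, reusing the SALES columns from the modeling example (table name and partition count are illustrative):

    -- spread the table across four hash partitions on REGION_ID,
    -- so aggregations can run on all partitions in parallel
    CREATE COLUMN TABLE "<YOUR SCHEMA>"."SALES_PART"(
          "REGION_ID"    INTEGER,
          "PRODUCT_ID"   INTEGER,
          "SALES_AMOUNT" DOUBLE
    ) PARTITION BY HASH ("REGION_ID") PARTITIONS 4;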

    A new paradigm for ABAP emerges: “code-2-data”

    To leverage the outlined innovations of SAP HANA from ABAP, database access becomes very important. It is crucial – not to say compulsory – for ABAP applications to move data-intensive operations (i.e. costly calculations on large datasets) to the database layer. This is in some respects a paradigm shift, often referred to as the "code pushdown" or "code-2-data" approach (in contrast to the "data-2-code" approach ABAP programs followed in the past).
    A diagram in the original article contrasts the two approaches: the "data-2-code" approach on the left and the "code-2-data" approach on the right.

    What is the difference between the two?

    In the past (based on traditional database technology) ABAP applications considered the database to be the bottleneck. Hence the complete business logic – including costly calculations – was implemented on the application layer. Very often a large number of records was transferred from the database to the application server to calculate only a few results there (for example to aggregate many line items to only a handful of key figures).

    Now and in the future (based on SAP HANA) the database is not the bottleneck anymore. ABAP applications need to move – at least – parts of the business logic to the database layer. This not only reduces the amount of data to be transferred between database and application server, but it also ensures that the part of the business logic moved to the database (implicitly) benefits from the innovations built into SAP HANA.

    However – especially when looking at existing ABAP programs – rewriting the business logic completely in the database might not be reasonable due to the efforts involved. Instead the “code pushdown” should mainly focus on costly calculations on large datasets. Orchestration, process and display logic should stay on the AS ABAP.
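
    A minimal sketch of this shift, using the flight-model booking table SBOOK (variable names are illustrative):

    DATA LV_CARRID TYPE SBOOK-CARRID VALUE 'LH'.
    DATA LV_TOTAL  TYPE SBOOK-LOCCURAM.

    " data-2-code: transfer many rows and aggregate on the application server
    SELECT LOCCURAM
      FROM SBOOK
      WHERE CARRID = @LV_CARRID
      INTO TABLE @DATA(LT_AMOUNTS).
    LOOP AT LT_AMOUNTS INTO DATA(LV_AMOUNT).
      LV_TOTAL = LV_TOTAL + LV_AMOUNT.
    ENDLOOP.

    " code-2-data: push the aggregation down to the database,
    " so only the single result value is transferred back
    SELECT SUM( LOCCURAM )
      FROM SBOOK
      WHERE CARRID = @LV_CARRID
      INTO @DATA(LV_TOTAL_DB).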

    What does all that mean for the ABAPer?

    The new capabilities of SAP HANA offer a huge variety of opportunities for ABAP developers:

    • Accelerate: by optimizing the code, the run-time for background jobs can be reduced.
    • Extend: turn background jobs into interactive applications.
    • Innovate: with SAP HANA's analysis and calculation capabilities, ABAP developers can design new applications that would not have been possible in the past.
    The arrival of in-memory technology on the database layer also requires a change in the way we design and implement applications.


    SAP introduces AI-assisted coding in its own ABAP language

    SAP has launched a set of AI-assisted coding features in its cloud-based application development environment, joining the list of...