I have several tables that were loaded from Oracle into HANA.
Among these tables, one is a fact table, and the others are dimension tables.
I want to write a SQL query that does two things:
Join the fact table and the dimension tables
Calculate measures (such as min, sum, count) against the fact table, grouped by two dimensions (roughly as in the sketch below)
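For illustration, here is a rough sketch of the kind of query I mean (all table and column names are made up, not my actual schema):

```sql
-- Rough sketch only: SALES_FACT, PRODUCT_DIM, REGION_DIM and their
-- columns are hypothetical names, not real tables.
SELECT
    p.product_category,
    r.region_name,
    MIN(f.amount) AS min_amount,
    SUM(f.amount) AS total_amount,
    COUNT(*)      AS row_count
FROM sales_fact f
JOIN product_dim p ON p.product_id = f.product_id
JOIN region_dim  r ON r.region_id  = f.region_id
GROUP BY
    p.product_category,
    r.region_name;
```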
What is the correct way to do this:
Write the SQL and run it directly, without modeling, or
Create Attribute Views and an Analytic View from the fact table and dimension tables, and then query the Analytic View.
I am new to HANA. Which one is preferred, and why?
Your question assumes that writing SQL queries is not data modelling. That's not correct. While the basic structures of SQL databases (e.g. tables and views) are pre-defined and "schema-on-write", what you make of those structures is completely up to your query design.
That's one of the strengths (if you will) of SQL: it allows you to change your point of view of the data and ask different questions. But the questions you ask are the models of your inquiry and your thinking.
In short: writing SQL is modelling.
Anyhow, I take it that in your question "modelling" == "using the HANA graphical modeller".
The answer to your question is of course "it depends".
Graphical modelling is easy (for simple models), requires less typing, and makes it very easy to expose those models to reporting tools and/or OData services.
SQL, on the other hand, is a very versatile tool for many data processing tasks and can quickly lead to results even in very complex scenarios.
Personally, I usually fiddle in SQL first and only use graphical models if I want to use specific features in them (e.g. anonymization functions in HANA 2 are not available in SQL).
Coming back to your example: if you want to reuse the dimension tables (especially when these are in fact not simple tables but complete join constructs in their own right), I'd probably go with the graphical modelling approach first to establish the "cube" structure. The queries against that cube structure would then be written in SQL.
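To make that concrete, a plain SQL query against an activated Analytic View could look roughly like the sketch below. The package path, view name, and columns are hypothetical; classic modelled views are activated as column views under the _SYS_BIC schema, and queries against Analytic Views need to aggregate over the measures.

```sql
-- Hypothetical example: "my.package/AN_SALES" stands for whatever
-- Analytic View you actually activate; column names are made up too.
SELECT
    product_category,
    region_name,
    SUM(amount) AS total_amount
FROM "_SYS_BIC"."my.package/AN_SALES"
GROUP BY
    product_category,
    region_name;
```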
Finally, execution performance should be very similar in most cases. I would use whatever tool feels best suited for a specific modelling task rather than pick one for "better performance". The reason is that fully understanding your data model (both the schema design and your query) is the best way to improve performance once you have established that the results are correct.