• Oracle Analytics 105.4 Release

    Oracle announced that Analytics 105.4 is now available.

    Announced on the Oracle Blog:

    The latest update to Oracle Analytics Cloud provides new functionality in two primary categories that will help you meet the breadth of your business analytics needs.

    We released the 105.4 update with Augmented Self-Service (so everyone in your organization can get the right data at the right time) and deeper Integration with all the common big data sources with built-in security and governance (so you can easily share insights and collaborate with your colleagues). The following blog is designed to help you navigate this release and achieve faster time to insights.

    Augmented Self-Service

    Some popular requests have been implemented in this release. These include copying and pasting visualization objects between dashboard projects, making it as easy as Ctrl-C and Ctrl-V. There are also new formatting options, allowing for customization of fonts and backgrounds for all visualization objects. Productivity and efficiency are the key drivers in the updates included for augmented self-service.

    Under Navigation and search, the most obvious update is the unified homepage for both Answers and Dashboards content as well as Data Visualization content. Users no longer need to go to different URLs to locate their different object types.

    In the natural language search bar, users can search any object (datasets, dataflows, reports, visualization projects etc.) or create new queries.

    Additionally, the natural language search capability has been enhanced to better understand word variants, including plurals, and to autocorrect entries even when you “fat-finger” them. It’s also easier for users consuming dashboards to move from a dashboard to a visualization project with one click when they spot a cause for investigation.
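A minimal sketch of how typo- and plural-tolerant search can work, using fuzzy string matching from the Python standard library. This is purely illustrative of the idea; the catalog names and the `tolerant_search` helper are invented for the example and are not Oracle's implementation.

```python
# Illustrative sketch of typo-tolerant catalog search (not Oracle's algorithm).
import difflib

CATALOG = ["Sales Dashboard", "Revenue Dataset", "Customer Dataflow",
           "Quarterly Report", "Churn Visualization Project"]

def tolerant_search(query, items=CATALOG):
    """Return catalog entries whose words loosely match the query,
    absorbing plurals and small typos via fuzzy string matching."""
    query_words = query.lower().split()
    results = []
    for item in items:
        item_words = item.lower().split()
        # An item matches if every query word is close to some item word.
        if all(difflib.get_close_matches(qw, item_words, n=1, cutoff=0.75)
               for qw in query_words):
            results.append(item)
    return results

print(tolerant_search("dashbord"))   # the typo still finds "Sales Dashboard"
print(tolerant_search("datasets"))   # the plural still finds "Revenue Dataset"
```

A real search service would add indexing and ranking, but the tolerance to small spelling variations is the same principle.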

    Data ingestion and preparation now better recognize ingested numeric values, including alternate decimal and thousands separator characters, which is critical for working with international datasets. For example, some European countries use a comma as the decimal separator and a point as the thousands separator (€5.876,03), the reverse of the alternate convention ($5,876.03). This causes problems if not accounted for in the ingestion process.
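To see why separator handling matters, here is a small sketch that parses an amount under explicit separator conventions. The `parse_amount` helper is invented for illustration; it is not part of any Oracle API.

```python
# Illustrative sketch: why decimal/thousands separator detection matters.
def parse_amount(text, decimal_sep=".", thousands_sep=","):
    """Parse a numeric string using explicit separator conventions."""
    # Keep only digits and the two separator characters.
    cleaned = "".join(ch for ch in text
                      if ch.isdigit() or ch in (decimal_sep, thousands_sep))
    # Drop grouping separators, then normalize the decimal mark.
    cleaned = cleaned.replace(thousands_sep, "").replace(decimal_sep, ".")
    return float(cleaned)

# The same digits mean the same amount only if the convention is known:
print(parse_amount("$5,876.03"))                                      # 5876.03
print(parse_amount("€5.876,03", decimal_sep=",", thousands_sep="."))  # 5876.03
# Misreading the European value with US conventions silently corrupts it:
print(parse_amount("€5.876,03"))                                      # 5.87603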

    There are new built-in steps that can be leveraged in the data flows section, like “Split Columns” and new column functions like “Trim,” reducing the need for custom expressions.
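For intuition, the two new steps can be sketched in pure Python as row transformations. The `split_column` and `trim` helpers below are illustrative stand-ins, not the Oracle data flow implementation.

```python
# Illustrative sketch of what "Split Columns" and "Trim" steps do to a row.
def split_column(row, column, sep, new_names):
    """Split one column's value into several new columns."""
    parts = row.pop(column).split(sep, len(new_names) - 1)
    row.update(dict(zip(new_names, parts)))
    return row

def trim(row, column):
    """Strip leading/trailing whitespace from a column's value."""
    row[column] = row[column].strip()
    return row

record = {"full_name": "Ada Lovelace", "city": "  London  "}
record = split_column(record, "full_name", " ", ["first_name", "last_name"])
record = trim(record, "city")
print(record)  # {'city': 'London', 'first_name': 'Ada', 'last_name': 'Lovelace'}
```

Having these as declarative built-in steps means analysts compose them visually instead of writing expressions like the above by hand.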

    Data flows can now be imported and exported allowing portability of the metadata in smaller bundles with or without the credentials, data, and access control lists.

    Visualization and spatial capabilities are now deeper and more robust. Combining the value of machine learning and statistical functions is key to innovating with analytics. This release extends forecasting to be available when you right-click on any generic time level in a visualization. In addition, improvements have been made in how the product works with map layers. The location match allows you to check whether all the records in your data are being paired with points on your map and act to address mismatches. When working with RPD sources, it is now possible to recognize geometry data types and push the execution of powerful geometric calculations down to the database for processing.
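To make the forecasting idea concrete, here is a minimal linear-trend extrapolation of the kind a right-click forecast might produce on a time series. This is a deliberately simple sketch; the product's forecasting may use more sophisticated models, and the data is invented.

```python
# Minimal linear-trend forecast sketch (illustrative, not Oracle's algorithm).
def linear_forecast(values, periods_ahead):
    """Fit a least-squares line to the series and extrapolate."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + k) for k in range(periods_ahead)]

monthly_sales = [100, 110, 120, 130]      # perfectly linear for clarity
print(linear_forecast(monthly_sales, 2))  # [140.0, 150.0]
```

Exposing this at any generic time level means the user never has to leave the visualization to get a projection.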

    The second area of focus for this release is integration. This unified platform has competitive and quantifiable advantages because it provides every business with the tools and capabilities it needs, whereas a multi-tool strategy increases architectural complexity and cost. Oracle is a leader in data security; it is our heritage as the undisputed leader in databases for many years. This heritage and knowledge boost the capabilities of this Oracle Analytics Cloud release.

    Connectivity and authentication are among the most complex aspects of data security. In this 105.4 release, support for Kerberos authentication to many of the common big data sources (like Apache Hive, Pivotal HD Hive, MapR Hive, Hortonworks Hive, IBM BigInsights Hive, and Impala) and SSL connectivity from Oracle Analytics Cloud to your data sources (like Spark, DB2, SQL Server, and all the sources mentioned previously) have been added.

    Oracle also continues to expand the range of data sources by leveraging connectivity to local systems. Connecting to data via the Remote Data Gateway avoids complex setups that require IT involvement and reuses any existing secure connectivity pathways. Other enhancements include better enterprise performance management (EPM) integration via the ADM driver and support for EPM Cloud 19.08 and later. Additional properties and columns are accessible via the RPD, and performance has been improved.

    Management and administration simplification has introduced updates to My Services Dashboard to provide a more unified experience. The redesigned page makes it easier to register safe domains.

    Storage options have been updated to include object store, allowing the object store to be used as an external storage option for Oracle Analytics Cloud data replication. When replicating data from Oracle Fusion Applications (SaaS), data can be loaded from a bucket in OCI Object Store or OCI-C.

    For a complete list of exactly what’s new, check out the release notes on the Oracle Help Center page for Oracle Analytics Cloud. To find out which bugs were fixed in the 105.4 release, check out the notes on the “My Oracle Support” page.

    To learn how you can benefit from Oracle Analytics, visit Oracle.com/analytics, and don’t forget to subscribe to the Oracle Analytics Advantage blog and get the latest posts sent to your inbox.

    Source: Oracle Blog

  • What is Augmented Analytics

    In July 2017, the concept of Augmented Analytics was first introduced in research published by Gartner. The topic immediately aroused interest and quickly became an impactful trend in the field of Business Intelligence, so much so that just one year later Gartner published the report “Hype Cycle for Analytics and Business Intelligence”.

    In its first definition, Augmented Analytics is described as a new data analysis approach that exploits Machine Learning and natural language technologies (such as Natural Language Generation, NLG) in order to automatically identify the most relevant results and independently suggest concrete actions to be taken. Not a bad goal…

    A new approach

    In fact, one of the main problems today is that data are becoming increasingly numerous and increasingly complex. Often it is not possible to integrate them quickly, partly because of this extreme complexity. The risk of losing information therefore becomes very high, and this very complexity prevents us from exploring all the opportunities offered by the information available to us. It also increases the likelihood of finding only what we “looked for” rather than what we “could have found”.

    Add to this the fact that, to date, all data extraction and transformation activities are still completely manual (the “Artisanal” area of the figure shown below), considerably increasing the possibility of error. Furthermore, finding a data scientist today (a relatively recent profession that requires skills in information technology, statistics, and economics, and for which there are still few courses of study) is really very difficult.

    It therefore seems natural that the introduction of analytical tools capable of interacting with human beings in natural language, and of autonomously identifying the most significant data without the mediation of analysts or human intervention in general, will become a fundamental key to the future of Business Intelligence.

    The traditional approach

    Traditional projects are currently approached by involving various figures, such as analysts, computer scientists, data scientists, and managers, engaged in the decision-making activities that lead to the final requirements. This is followed by various data extraction and cleaning activities performed by computer scientists or technicians with specific skills. These data are then linked together and only afterwards transformed into information that is more synthetic and “decisive”.

    As we can easily imagine, the costs and times of such an activity are considerable.

    The Augmented Analytics approach

    With this new approach we try to centralize all the procedures in a single solution, from data collection to analysis processing to results monitoring. Machine Learning and Artificial Intelligence algorithms are used to automate these procedures, making data analysis easier. Billions of data combinations are analyzed automatically, finding correlations and identifying possible predictive scenarios.
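A toy sketch of the "automatic correlation finding" idea: scan every pair of numeric columns and surface the strongly correlated ones, the way augmented analytics tools flag candidate relationships without being asked. The column names and thresholds are invented for illustration.

```python
# Toy sketch of automated insight discovery via pairwise correlation scanning.
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

data = {
    "ad_spend":    [10, 20, 30, 40, 50],
    "visits":      [105, 210, 290, 420, 500],
    "temperature": [12, 9, 15, 11, 13],
}

def strong_correlations(columns, threshold=0.9):
    """Scan every pair of columns and keep the strongly correlated ones."""
    found = []
    for a, b in combinations(columns, 2):
        r = pearson(columns[a], columns[b])
        if abs(r) >= threshold:
            found.append((a, b, round(r, 3)))
    return found

print(strong_correlations(data))  # only ad_spend vs. visits is flagged
```

Production systems add statistical safeguards (significance tests, multiple-comparison corrections) so that scanning billions of combinations does not drown users in spurious findings.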

    The future of Augmented Analytics

    According to the slide (source Oracle Corporation), there are several levels:

    Level 0 – Artisanal: everything is handmade, as in the classic approach. The data model and reporting are the responsibility of IT.

    Level 1 – Self Service: data management is still largely manual, but human interaction with data is done in natural language (Natural Language Query). Visualizations and graphs are suggested based on the data being queried.

    Level 2 – Deeper Insights: the first phases of advanced data management appear (recommended sources, joins, crowdsourced suggestions, intelligent cataloging), and augmented navigation helps discover information that would otherwise require considerable human effort.

    Level 3 – Data Foundation: data management is augmented; corrections and enrichments are identified automatically. New views and new datasets are added.

    Level 4 – Collective Intelligence: the system learns metrics and KPIs and alerts you when they require your attention. There are both company KPIs and system KPIs. Insights become pervasive, business intent passes from idea to reality, results are anticipated, actions are recommended, but humans still act.

    Level 5 – Autonomous: everything becomes truly data-driven, with the next best actions performed on the basis of forecasts, insights, and intents. The system is the engine of change.

    Where are we at?

    I believe that for the moment we have stabilized between the third and fourth levels: data management is starting to be truly “augmented”, it is becoming easier to integrate and manage large amounts of information, and we are taking the first steps towards automatic identification of KPIs.

    Completing the fourth level and finally reaching the fifth, fully “autonomous” one will be the challenge of the coming years. The way of conceiving Business Intelligence will change drastically, but the challenges ahead are ever more exciting. Are we ready?


    In the analytics business and in the business world, business intelligence, analytical tools, and BI applications are the most discussed subjects. BI is faster and more precise in reporting, data analysis is increasingly robust, and forecasting has changed substantially and matured massively. An ever-increasing number of companies have begun to appreciate the value of business intelligence (BI) in their decision-making processes. The past few years have seen BI systems undergo various advancements. With the rising complexity of the business intelligence environment, the identification of trends and market developments is a key factor in effective decision-making. It is increasingly important to use the latest technologies and methodologies in order to cope with digitalization and market competition. Let’s look at the top BI trends that will dominate 2020.


  • SQL Server: how to move tempdb database

    SQL Server doesn’t support moving the TempDB database using backup/restore or detach database methods. In this article I explain the steps you must follow to move the TempDB database from one drive to another in SQL Server. For the changes to come into effect, you must restart the SQL Server service.

    Start SQL Server Management Studio and execute the below query:

    USE master;
    GO
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'C:\yourpath\tempdb.mdf');
    GO
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'T:\yourpath\templog.ldf');
    GO

    Remember to replace yourpath with your preferred path.

    Once the query is executed, SQL Server displays a message reminding you to stop and restart the SQL Server instance for the changes to come into effect.

    After the restart, verify that the changes are complete:

    USE master;
    GO
    SELECT name AS [LogicalName]
        ,physical_name AS [Location]
        ,state_desc AS [Status]
    FROM sys.master_files
    WHERE database_id = DB_ID(N'tempdb');

    Now, you can delete the old tempdb files.

  • Reports fail with Logical drill graph level-‘to’ does not exist Error After Upgrade to


    Business Intelligence Suite Enterprise Edition – Version and later
    Information in this document applies to any platform.


    After an upgrade from to, certain reports – those based on the ABC subject area – now fail and throw the error below.  It was further determined that a newly created report with just the Customer > Customer No presentation column would also throw the error.

    View Display Error
    Logical drill graph level-‘to’: ‘####’ – does not exist. SQL: ‘{call NQSGetLogicalDrillGraph(‘ABC’,’%’,’####’,’%’)}’
    Error Details
    Error Codes: KZDX4O6Q
    Location: saw.views.evc.activate, saw.subsystem.portal.pagesImpl, saw.subsystem.portal, saw.httpserver.processrequest, saw.rpc.server.responder, saw.rpc.server, saw.rpc.server.handleConnection, saw.rpc.server.dispatch, saw.threadpool.socketrpcserver, saw.threads

    A Global Consistency Check of the RPD returned no Errors, although there were many Warnings like the following reported against the ABC Business Model:-

    Business Model ABC:
    [38100] The logical tables ‘”ABC”.”Loan”‘ and ‘”ABC”.”Fact – Activity”‘ connected by logical join ‘”Relationship_####:####”‘ have no corresponding physical connections.


    This issue was caused by an “incorrect” / “invalid” configuration in the dimension hierarchy.

    In the BI Admin Tool, open the RPD and go to the ABC > Customer > Customer No presentation column … Query related objects > Logical Column … see the Total level appears to have been duplicated in the Customer Dim dimension hierarchy:-


    In the RPD > BMM Layer > Dimension Hierarchy … delete the invalid / duplicated level (Total#1)


    NOTE:1946146.1 – Getting An Error While Adding A New View To The Analysis, Error: “Logical drill graph level- …. does not exist “

  • [nQSError: 14025] No fact table exists at the requested level of detail


    Business Intelligence Suite Enterprise Edition – Version and later
    Information in this document applies to any platform.



    Issue seen: After upgrading from OBIEE 10g to 11g, the report throws the following error:

    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14025] No fact table exists at the requested level of detail: [,,[AGENCY.Agency],[POPULATION.Population],,,,,,,,,,,,,,,,,,,,,,,,,,,,]. [nQSError: 14081] You may be able to evaluate this query if you remove one of the following column references: AGENCY.Agency, POPULATION.Population (HY000)

    The same reports are working fine in 10g.


    The error was caused by incorrect metadata design in the RPD. The fact source “Case” was not recognized as a fact source by the OBI server because of incorrectly defined joins in the BMM layer. The many side of the join was pointing to dimension tables instead of the fact table, and because of this the Case logical table was not recognized as a fact source.

    As per RPD design best practices, the join definitions in the BMM layer have to be correctly defined between the fact and dimension tables.


    The solution is to recreate the joins in the BMM layer, making sure that the many side of each join points to the fact source.
    Once this change was made and the modified RPD was uploaded, the report no longer threw the error and displayed results correctly.

  • Data Scientist: Who he is and what he does

    We've all heard a bit about it by now: the Data Scientist has, for a few years, been one of the most sought-after professional figures. Who exactly is he and what exactly does he do? Here are some answers we can find on the web:

    What does the Data Scientist do?

    Part statistician, part computer scientist, part economist, but also a marketing expert and a communication enthusiast. This is the data scientist, the job that the Harvard Business Review defined in 2013 as the "sexiest profession of the 21st century". Is this a sought-after professional figure in Italy? And how do you become a data scientist?

    According to Claudio Sartori, Scientific Director of the new Master in Data Science at Bologna Business School, in an interview reported on sarce.it, it is a figure that "requires multidisciplinary skills, because one must not only select, analyze, and interpret an ever-wider and more complex body of data, but also find the best way to make the processing and results available to the organization for which one works, whether it is a company or a public administration."

    "The data scientist must first put the data in order, then ask where his organization wants to go and what information can be useful for its strategy. Finally, he must know how to do, but also how to communicate, making the results of his work available to management. The most sophisticated analyses are only useful if they are properly conveyed to those who have to make the decisions, then understood and used to achieve the desired results."

    In short, the primary task of a data scientist is to explore the data, starting from precise business questions. He is a real investigator and brings all his analytical creativity to the field. Armed with technological tools and machine learning algorithms, he is able to scientifically examine and predict correlations between phenomena that are invisible at first analysis. His goal is to obtain insights accurate enough to give the business a clear overview of the problem to be solved.

    Who is the Data Scientist?

    An attempt was made to profile the Data Scientist in Italian companies; the results were reported in "The new professionalism and skills for the management of big data", a report prepared by the Digital Innovation Observatories of the Polytechnic University of Milan:

    He works within the IT division, an ad hoc business function, or one of the pre-existing functions. He is a graduate, mostly with a master's degree, who has often taken training courses in statistics and computer science. He has a skill set focused primarily on machine learning, analytics, and knowledge deployment. And he earns on average about 67,000 euros a year, with a bonus that generally stands at 10% of the salary.

    It is clear that a strong heterogeneity of skills is essential, from business to programming to technology, and especially the trio of Machine Learning, Analytics, and Knowledge Deployment. From a training point of view, most (50%) have completed a master's degree (engineering, economics, and computer science being the most common), accompanied by specialization courses (statistics, computer science, and management being the most chosen).

    The Data Scientist in Italy

    The Data Scientist is now present in 3 out of 10 companies, and the number of specialists employed full-time is growing at an annual rate of 57%. This is a sign of growing corporate sensitivity to the new challenges of the big data boom.

    According to data from Robert Half's Technology and IT Salary Guide for 2018, a data scientist's average salary can range, depending on experience, between 100,000 and 168,000 dollars per year.

    Who needs a Data Scientist?

    Each sector has its own wealth of data to analyze. Businesses need to analyze their data to make decisions about efficiency, inventory, manufacturing errors, customer loyalty, and so on. In the e-commerce sector, recognizing trends improves the proposal to the customer; in finance, transaction data are fundamental assets; and even in Public Administration, the general satisfaction of citizens can be monitored. Health, communication, and social networking are other areas where the needs are evident.

    Il Sole 24 Ore also reports that in Italy fewer than one in three large companies has recognized the need to have a Data Scientist in-house. Among Italian SMEs, only 34% have a budget dedicated to Analytics.


    This figure is described as a professional able to range across technical, computing, economic, and statistical skills. It is a kind of evolution from the Business Intelligence figure to the Data Scientist. Whereas in the first case requests are collected from the business and numbers are returned as output, in the second case it is the Data Scientist who "proposes solutions" starting from a mostly generic business input. He gathers information on his own and analyzes its correlations, creating new algorithms and applying machine learning techniques.