Oracle announced that Analytics 105.4 is now available.
Announced on the Oracle Blog:
The latest updates to Oracle Analytics Cloud provide new functionality in two primary categories that will help you meet the full breadth of your business analytics needs.
We released the 105.4 update with Augmented Self-Service (so everyone in your organization can get the right data at the right time) and deeper integration with the common big data sources, with built-in security and governance (so you can easily share insights and collaborate with your colleagues). The following blog is designed to help you navigate this release and achieve faster time to insights.
Some popular requests have been implemented in this release. These include copying and pasting visualization objects between dashboard projects, making it as easy as Ctrl-C and Ctrl-V. There are also new formatting options, allowing for customization of fonts and backgrounds for all visualization objects. Productivity and efficiency are the key drivers in the updates included for augmented self-service.
Under Navigation and search, the most obvious update is the unified homepage for both Answers and Dashboards content as well as Data Visualization content. Users no longer need to go to different URLs to locate their different object types.
In the natural language search bar, users can search any object (datasets, dataflows, reports, visualization projects etc.) or create new queries.
Additionally, the natural language search capability has been enhanced to better understand variants in words, including plurals and autocorrect even when you “fat-finger” the entry. It’s also easier for users consuming dashboards to quickly move from the dashboard to a visualization project with 1-click when they spot a cause for investigation.
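As an illustration of the kind of tolerant matching such a search bar performs, here is a minimal Python sketch (not Oracle's implementation) that absorbs a naive plural and a mistyped term using the standard library's `difflib`; the catalog names are hypothetical.

```python
import difflib

# Hypothetical catalog of object names a search bar might match against.
catalog = ["dataset", "dataflow", "report", "visualization project"]

def match_term(term, names=catalog):
    # Strip a naive English plural, then fall back to closest-match
    # lookup to absorb "fat-fingered" entries.
    singular = term[:-1] if term.endswith("s") else term
    hits = difflib.get_close_matches(singular, names, n=1, cutoff=0.6)
    return hits[0] if hits else None

print(match_term("datsets"))   # typo plus plural still resolves
print(match_term("reprots"))   # transposed letters still resolve
```

Real product search is far more sophisticated, but the idea is the same: the query does not have to match the stored name exactly.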
Data ingestion and preparation have improved to better recognize ingested numeric values. The product better understands alternative separator and decimal characters, which is critical for working with international datasets. For example, some European countries use a comma as the decimal separator and a point as the thousands separator (€5.876,03), versus the alternate convention ($5,876.03). This causes problems if not considered in the ingestion process.
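To see why separator conventions matter, here is a minimal Python sketch of locale-aware parsing, assuming only the two conventions mentioned above (the function name and flag are illustrative, not part of the product):

```python
# Parse a numeric string under one of two separator conventions:
# European "5.876,03" (point = thousands, comma = decimal) versus
# US "5,876.03" (comma = thousands, point = decimal).
def parse_amount(text, european=False):
    if european:
        # Drop thousands points, then turn the decimal comma into a point.
        text = text.replace(".", "").replace(",", ".")
    else:
        # Drop thousands commas; the decimal point is already correct.
        text = text.replace(",", "")
    return float(text)

print(parse_amount("5.876,03", european=True))   # 5876.03
print(parse_amount("5,876.03"))                  # 5876.03
```

Feeding a European-formatted value through the US path (or vice versa) yields a wrong number, which is exactly the ingestion problem the release addresses.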
There are new built-in steps that can be leveraged in the data flows section, like “Split Columns” and new column functions like “Trim,” reducing the need for custom expressions.
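A rough Python analogue of what "Split Columns" plus "Trim" accomplish, applied to a list of dicts standing in for ingested rows (the column names are hypothetical, and this is only a sketch of the transformation, not the product's engine):

```python
# Two ingested rows with a combined, untrimmed "full_name" column.
rows = [{"full_name": "  Smith, Ada  "}, {"full_name": "Jones,  Grace"}]

def split_and_trim(row):
    # "Split Columns": break one column into two on the first comma.
    last, first = row["full_name"].split(",", 1)
    # "Trim": strip stray whitespace from each resulting value.
    return {"last_name": last.strip(), "first_name": first.strip()}

cleaned = [split_and_trim(r) for r in rows]
print(cleaned)
```

Built-in steps like these mean the same cleanup no longer needs a hand-written expression per column.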
Data flows can now be imported and exported allowing portability of the metadata in smaller bundles with or without the credentials, data, and access control lists.
Visualization and spatial capabilities are now deeper and more robust. Combining the value of machine learning and statistical functions is key to innovating with analytics. This release extends forecasting to be available when you right-click on any generic time level in a visualization. In addition, improvements have been made in how the product works with map layers. The location match allows you to check whether all the records in your data are being paired with points on your map and act to address mismatches. When working with RPD sources, it is now possible to recognize geometry data types and push the execution of powerful geometric calculations down to the database for processing.
The second area of focus for this release is integration. This unified platform has competitive and quantifiable advantages because it provides every business with the tools and capabilities they need, whereas a multi-tool strategy increases architectural complexity and cost. Oracle is a leader in data security; it is our heritage as the undisputed leader in database for many years. This heritage and knowledge boost the capabilities of this Oracle Analytics Cloud release.
Connectivity and security are some of the most complex aspects related to data security. In this 105.4 release, support for Kerberos Authentication to many of the common big data sources (like Apache Hive, Pivotal HD Hive, MapR Hive, Hortonworks Hive, IBM Big Insights Hive, and Impala) and SSL connectivity from Oracle Analytics Cloud to your data sources (like Spark, DB2, SQL Server, and all the sources mentioned previously) has been added.
Oracle also continues to expand the range of data sources to leverage connectivity on local systems. Connecting to data via the Remote Data Gateway avoids complex setups that require IT and the reuse of any existing secure connectivity pathways. Other enhancements include better enterprise performance management (EPM) integration via the ADM driver and support for EPM Cloud 19.08 and later. Additional properties and columns are accessible via the RPD and the performance has been improved.
On the management and administration side, updates to the My Services Dashboard provide a more unified experience. The redesigned page makes it easier to register safe domains.
Storage options have been updated to include object store, allowing the object store to be used as an external storage option for Oracle Analytics Cloud data replication. When replicating data from Oracle Fusion Applications (SaaS), data can be loaded from a bucket in OCI Object Store or OCI-C.
For a complete list of exactly what’s new, check out the release notes on the Oracle Help Center page for Oracle Analytics Cloud. To find out which bugs were fixed in the 105.4 release, check the notes on the “My Oracle Support” page.
To learn how you can benefit from Oracle Analytics, visit Oracle.com/analytics, and don’t forget to subscribe to the Oracle Analytics Advantage blog and get the latest posts sent to your inbox.
Source: Oracle Blog
In July 2017, the concept of Augmented Analytics was first introduced in research published by Gartner. The topic immediately aroused interest and became an impactful trend in the field of Business Intelligence, so much so that just one year later Gartner published the report “Hype Cycle for Analytics and Business Intelligence”.
In its first definition, Augmented Analytics is described as a new data-analysis approach that exploits Machine Learning and natural language generation (NLG) technologies in order to automatically identify the most relevant results and independently suggest concrete actions to take. Not a bad goal…
In fact, one of the main current problems is that data keep becoming more numerous and more complex. Often it is not possible to integrate them quickly, precisely because of this complexity. The risk of losing information therefore becomes very high, and this same complexity prevents us from exploring all the opportunities the available information offers. It also increases the likelihood of finding only what we “looked for” rather than what we “could have found”.
Add to this that, to date, most data extraction and transformation activities are still completely manual (the “Artisanal” area of the figure shown below), considerably increasing the possibility of error. Furthermore, finding a data scientist, a relatively recent profession that requires skills in information technology, statistics, and economics, and for which few training programs yet exist, is still very difficult.
It therefore seems natural that analytical tools capable of interacting with humans in natural language, and of autonomously identifying the most significant data without the mediation of analysts or human intervention in general, become a fundamental key to the future of Business Intelligence.
Traditional projects currently involve various figures, such as analysts, computer scientists, data scientists, and managers, engaged in the usual decision-making activities that lead to the final requirements. This is followed by various data extraction and cleaning activities performed by computer scientists or technicians with specific skills. These data are then linked together and only afterwards transformed into information that is more synthetic and “decisive”.
As we can easily imagine, the costs and timescales of such an activity are considerable.
With this new approach we try to centralize all the procedures in a single solution, from data collection to analysis processing to results monitoring. Machine Learning and Artificial Intelligence algorithms are used to automate the procedures, making data analysis easier. Billions of data combinations are analyzed automatically, finding correlations and identifying predictive scenarios.
According to the slide (source Oracle Corporation), there are several levels:
Level 0 – Artisanal: everything is handmade, as in the classic approach. The data model and reporting are the responsibility of IT.
Level 1 – Self Service: data management is still largely manual, but human interaction with data will be done with the natural language (Natural Language Query). Visualizations and graphs will be suggested based on the data we are querying.
Level 2 – Deeper Insights: the first phases of advanced data management appear (recommended sources, joins, crowdsourced suggestions, intelligent cataloging), and augmented navigation helps discover information that would otherwise require significant human effort.
Level 3 – Data Foundation: data management is augmented; corrections and enrichments are identified automatically. New views and new datasets are added.
Level 4 – Collective Intelligence: the system learns metrics and KPIs that alert you when they require your attention. You have both company KPIs and system KPIs. Insights become pervasive, commercial intent passes from an idea to a reality, results are expected, actions are recommended, but humans continue to act.
Level 5 – Autonomous: everything becomes really guided by data, with the best subsequent actions performed on the basis of forecasts, insights and intents. The system is the engine of change.
I believe that, for the moment, we have settled between the third and fourth stages: data management is starting to be truly “augmented”, it is becoming easier to integrate and manage large amounts of information, and the first steps are being taken toward automatic identification of KPIs.
Completing the fourth stage and finally reaching the fifth, fully “autonomous” one will be the challenge of the coming years. The way we conceive of Business Intelligence will change drastically, but the challenges ahead are ever more exciting. Are we ready?
All modern enterprises are on a quest to identify signals captured within data (created inside and outside the organization) and turn them into actionable insights. For many, it’s table-stakes in an increasingly digital world. For others, it seems like the search for the Holy Grail.
Data is everywhere. It comes from our environment, our business, even ourselves. Data connects us. But access to data alone is not enough. Start the journey to understand your data with machine learning-based insights.
In the analytics business and in the business world, business intelligence, analytics tools, and BI applications are the main topics of discussion. BI is faster and more accurate in reporting, data analysis is more robust, and forecasting has changed substantially and matured massively. An ever-increasing number of companies have begun to appreciate the worth of business intelligence (BI) in their decision-making processes. The past few years have seen BI systems undergo numerous advancements. With the rising complexity of the business intelligence environment, the identification of trends and market developments is a key factor in effective decision-making. It is increasingly important to use the latest technologies and methodologies in order to cope with digitalization and market competition. Let’s look at the top BI trends that will dominate 2020.
SQL Server doesn’t support moving the TempDB database using backup/restore or detach-database methods. In this article I explain the steps you must follow to move the TempDB database from one drive to another in SQL Server. Note that for the changes to take effect, you must restart the SQL Server service.
Start SQL Server Management Studio and execute the below query:
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'C:\yourpath\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\yourpath\templog.ldf');
Remember to replace yourpath with your preferred path.
Once the query is executed, SQL Server displays a message reminding you to stop and restart the SQL Server instance for the changes to take effect.
After the restart, verify that the changes are in place by querying sys.master_files:
SELECT name AS [LogicalName]
    ,physical_name AS [Location]
    ,state_desc AS [Status]
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
Now, you can delete the old tempdb files.
Business Intelligence Suite Enterprise Edition – Version 220.127.116.11.0 and later
Information in this document applies to any platform.
After an upgrade from 18.104.22.168.140715 to 22.214.171.124.0, certain reports – those based on the ABC subject area – now fail and throw the error below. It was further determined that a newly created report with just the Customer > Customer No presentation column would also throw the error.
A Global Consistency Check of the 126.96.36.199.0 RPD returned no Errors, although there were many Warnings like the following reported against the ABC Business Model:
This issue was caused by an “incorrect” / “invalid” configuration in the dimension hierarchy.
In the BI Admin Tool, open the RPD and go to the ABC > Customer > Customer No presentation column … Query related objects > Logical Column … and note that the Total level appears to have been duplicated in the Customer Dim dimension hierarchy:
In the RPD > BMM Layer > Dimension Hierarchy … delete the invalid/duplicated level (Total#1).
NOTE:1946146.1 – Getting An Error While Adding A New View To The Analysis, Error: “Logical drill graph level- …. does not exist “
Business Intelligence Suite Enterprise Edition – Version 188.8.131.52.170418 and later
Information in this document applies to any platform.
Issue Seen: After upgrading from OBIEE 10g to 11g, the report throws the following error:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14025] No fact table exists at the requested level of detail: [,,[AGENCY.Agency],[POPULATION.Population],,,,,,,,,,,,,,,,,,,,,,,,,,,,]. [nQSError: 14081] You may be able to evaluate this query if you remove one of the following column references: AGENCY.Agency, POPULATION.Population (HY000)
The same reports are working fine in 10g.
The error was caused by incorrect metadata design in the RPD. The fact source “Case” was not recognized as a fact source by the BI Server because of incorrectly defined joins in the BMM layer. The “many” side of the join was pointing to dimension tables instead of the fact table, and because of this the Case logical table was not recognized as a fact source.
As per RPD design best practices, the join definitions in the BMM layer must be correctly defined between the fact and dimension tables.
The solution is to recreate the joins in the BMM layer and make sure that the “many” side of each join points to the fact source.
Once this change was made and the modified RPD was uploaded, the report no longer threw the error and displayed results correctly.