Big Data: Business Opportunities, Requirements and Approach
The term big data draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information.
This presentation and discussion will help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.
Exadata Patching Demystified
One of the most commonly heard complaints surrounding the Exadata platform is the complexity of the patching process. This session will delve into the world of Exadata patching to explain the various types of patches, how often patches should be applied, and how patching on the Exadata platform has been made less complex over the last 2 years. It will also provide a repeatable method that has a proven track record of ensuring that patches are successfully applied, culled from real-world experiences of patching more than 30 customer Exadata systems. Finally, the session will discuss common mistakes made when patching and how to avoid them.
Managing Exadata in the Real World
Exadata is a combination of database, server, storage, and network, all contained inside a single frame. Traditionally these components are managed separately; but since they are all highly interconnected inside a single frame, should the machine be managed by a single job role - a Database Machine Administrator (DMA)? The author not only believes so; he has successfully transitioned to that role and built up a team. In this session, you will learn:
• Different components inside Exadata
• What skills are required to manage these components
• Why a single role is better than the traditional division of labor
• The approximate mix of skills the role requires
• How to evolve a traditional job role to this role
• What sort of training is required and its cost
• Other issues, such as the security implications of consolidating a divided job role, political concerns, etc.
2 Years of Exadata in Production
After running Exadata in production for more than 2 years, Targetbase is sharing its experience with the platform. Having purchased its first racks in April 2010, Targetbase quickly moved them into production and hasn't looked back. This session will look at what it means to run Exadata on a day-to-day basis, how to handle mixed workloads, and what to do when you can't simply drop all of your indexes. Other topics include platform stability, how to minimize backup time, and how Targetbase was able to take advantage of the enhancements provided by the Exadata platform.
My Perspective on Exadata
Cary Millsap gives his perspective on Oracle Exadata.
Parallel Query on Exadata
Oracle 11g introduced more new Parallel Query features than any recent release, including:
- Statement queuing
- In-Memory PQ
As well as running 11gR2, Exadata implements a reasonably balanced configuration and helps eliminate the storage bottlenecks that typically inhibited previous customer attempts to use large Degrees of Parallelism. So is Exadata the ideal platform for Oracle Parallelism?
This presentation will discuss which of these features is likely to be most useful in practical real-world situations and some of the trade-off decisions required.
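As a hedged sketch of the kind of configuration these features involve (the parameter values here are illustrative, not recommendations), Automatic Degree of Parallelism, statement queuing, and In-Memory PQ are all switched on together by one parameter, and queuing behavior can be observed afterwards:

```sql
-- AUTO enables Auto DOP, statement queuing, and In-Memory PQ together.
ALTER SYSTEM SET parallel_degree_policy = AUTO;

-- Statements requesting more PX servers than this target are queued
-- rather than downgraded (value is illustrative only).
ALTER SYSTEM SET parallel_servers_target = 64;

-- After the fact, compare requested vs. allocated PX servers.
SELECT sql_id, px_servers_requested, px_servers_allocated
FROM   v$sql_monitor
WHERE  px_servers_requested IS NOT NULL;
```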
Exadata and OLTP
Exadata is targeted by Oracle at both data warehousing and OLTP. But what can you expect from Exadata in an OLTP environment? What are the strengths and weaknesses? This session focuses on the different layers of data storage in Exadata, how to use them, and what performance to expect.
The Oracle Exadata database machine is positioned as a solution for both data warehousing and online transaction processing (OLTP). But does Oracle Exadata give you a 10x-plus performance improvement in every case, and specifically with OLTP-type queries? This session focuses on running OLTP workloads on Oracle Exadata, with particular attention to logical IO through the Oracle buffer cache and to physical IO, where the role of the flash cache is shown. The last part covers writes on the Oracle database machine, which are also important for OLTP. So if you want to understand more about the performance implications of using Oracle Exadata for OLTP-like databases, this is the session for you.
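As a hedged illustration of one flash-cache control relevant to OLTP on Exadata (the table name is hypothetical), a hot table can be pinned in the Exadata Smart Flash Cache with a storage clause:

```sql
-- Ask the storage cells to keep this table's blocks in flash cache
-- (ORDERS is a hypothetical hot OLTP table).
ALTER TABLE orders STORAGE (CELL_FLASH_CACHE KEEP);

-- Verify the setting in the data dictionary.
SELECT table_name, cell_flash_cache
FROM   user_tables
WHERE  table_name = 'ORDERS';
```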
Oracle Exadata Management with Oracle Enterprise Manager
Attend this session to get the inside track on how Oracle Exadata is monitored and managed using Oracle Enterprise Manager. Learn from a real-world implementation on how to effectively instrument and monitor your engineered systems. We will also cover known implementation and configuration issues often encountered when installing and configuring EM for Exadata environments.
IO Resource Management on Exadata
Database consolidation on Exadata is becoming increasingly common as companies standardize their enterprise Oracle database environments on the platform. To protect your IO performance investment, it's critical to implement resource management controls for your consolidated database landscape. Exadata IO Resource Management, or IORM, is an often overlooked and under-appreciated software feature available only on Exadata. In this presentation I'll present the overall IORM architecture, discuss where IORM fits into the Exadata software portfolio, and present the business justification for why IORM is important. I'll combine the IORM discussion with a database resource management primer and show a real-world IORM/DBRM design and implementation, followed by a collection of database and cell server monitoring techniques to measure and quantify the impact of IORM policies on your Exadata environment.
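As a hedged sketch of what an inter-database IORM plan looks like (database names and allocation percentages are hypothetical), the plan is set on each storage cell through CellCLI:

```sql
-- CellCLI, run on each storage cell: prioritize PROD over REPORT,
-- with everything else at the lowest level (values illustrative only).
ALTER IORMPLAN dbplan=((name=PROD, level=1, allocation=70), (name=REPORT, level=2, allocation=100), (name=other, level=3, allocation=100))

-- Inspect the active plan on the cell.
LIST IORMPLAN DETAIL
```

Within each database, ordinary DBRM consumer-group plans then control how that database's IO share is divided among its own workloads.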
Due Diligence with Exadata
Over the last 24 years one of the most common patterns of failure I've seen in Oracle-based systems comes from the mismatch between expectation and implementation. It's easy to look at new features and see how, in theory, they offer all sorts of benefits but then find that, in practice, you don't achieve those benefits.
Partitioning is one of the more recent examples. Many people buy into the simple logic that if you cut a big thing into little pieces and only use some of the pieces you're bound to do less work. However, they soon realize they cannot find an appropriate way of cutting big things into little things.
Exadata is the latest option that is "obviously" a good thing in theory. In this presentation we look at the features and benefits it offers and ask the cautionary question: "Is there any reason that it won't work for me?". We'll try to cover all the usual suspects - compression, offload, storage indexes, hardware and licenses and see how often we repeat the phrase: "Yes, but..."
Tuning SQL for Oracle Exadata: The Good, The Bad, and The Ugly
Tuning SQL for the Oracle Exadata Database Machine presents unique challenges that require a shift in both thinking and the approach used to achieve optimal performance. Strategies that work on a non-Exadata platform may not work when moving to Exadata. During this session, hear how a long-time performance consultant adapts tried-and-true tuning methods to work more effectively on Exadata. I will share examples of how old, reliable strategies failed (or produced little to no performance improvement) and how new strategies had to be learned in order to successfully tune SQL running on Exadata. I will give you very simple to very complex sample queries with "before" and "after" elapsed-time and resource-consumption comparisons to show that you can meet and exceed the 10x-plus performance improvements that Oracle says are possible with Exadata.
Hadoop Meets Exadata
Big Data has become the buzzword of 2012. The explosion of machine-generated data is a big driver of the new technologies, and the deluge is just beginning. The Hadoop framework provides the ability to deal with extreme data volumes but has limitations. In many cases, the combination of Hadoop and RDBMS software can work better than either approach alone. Exadata provides a hybrid architecture that sits somewhere between the two by combining a traditional RDBMS approach with distributed storage nodes capable of independent data processing. Making good decisions about where to use each technology will be key. This presentation will compare and contrast the architectures and show how to use them together.
Exadata and the Oracle Optimizer: The Untold Story
Since its inception in 2008, the Exadata platform has evolved from a balanced hardware configuration for data warehouse environments to the platform of choice for all database applications. With each new release, Exadata has introduced key performance-enhancing features such as query offload, storage indexes, flash cache, and Hybrid Columnar Compression. Knowing when and how to take advantage of each of these features can be a daunting task even for the Oracle Optimizer, whose goal has always been to find the optimal execution plan for every SQL statement. This session explains in detail how the Oracle Optimizer costing model has been impacted by the introduction of the performance-enhancing features of the Exadata platform. It will show, through real-world examples, what you can do to ensure the Optimizer fully understands the capabilities of the platform it is running on without having to mess with initialization parameters or Optimizer hints.
Hybrid Columnar Compression in a non-Exadata System
Although designed as a feature of the 11g database, Hybrid Columnar Compression (HCC) was first released only as one of the key elements of an Exadata appliance. As Oracle expanded into the storage business with the acquisitions of Sun and Pillar, HCC was made available on a standard 11gR2 database when used together with Oracle storage.
In this presentation we will look at the implications of using Hybrid Columnar Compression in a non-Exadata system in combination with the ZFS Storage Appliance. In particular we will discuss the following topics:
• HCC: Do you have a good use case?
• Where HCC can (and can’t) provide real advantages
• ZFS as a Storage Appliance (and as an Oracle appliance)
• Proof of Concept: Putting it all together
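As a hedged sketch of the syntax involved (table names are hypothetical), HCC is enabled with a compression clause at table creation, and the compression type actually applied to stored rows can be verified with the DBMS_COMPRESSION package:

```sql
-- Create an HCC table on supported storage (names are hypothetical).
CREATE TABLE sales_hist
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- Check the compression type actually applied to a sample row.
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(
         ownname => USER, tabname => 'SALES_HIST', row_id => ROWID)
FROM   sales_hist
WHERE  ROWNUM = 1;
```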
A PeopleSoft & OBIEE Consolidation Success Story
In today’s competitive business climate companies are under constant pressure to reduce costs without sacrificing quality. Many companies see database and server consolidation as the key to meeting this goal. Since its introduction, Exadata has become the obvious choice for database and server consolidation projects. It is the next step in the evolutionary process. But managing highly consolidated environments is difficult, especially for mixed workload environments. If not done properly the quality of service suffers. In this session we tell the tale of a large real estate investment company that successfully consolidated their global operations onto a Maximum Availability Architecture Exadata platform. Applications sharing this environment include PeopleSoft Financials, PeopleSoft HR, Portal, and OBIEE. Accurate provisioning and management of system resources was absolutely essential to our success. In this session we share lessons learned and the tools you’ll need to ensure that your consolidation story has a happy ending.
Indexing in Exadata
There's often confusion regarding how indexing requirements may change when moving to Exadata, with some even suggesting that indexes are perhaps no longer required at all. Considering that indexes can consume a considerable proportion of total storage within a database and can be crucial to general database performance, care needs to be taken to fully consider indexing requirements when moving to Exadata. This presentation will discuss the indexing structures unique to Exadata, how indexing considerations change (and don't change), and how database usage is critical to indexing requirements. It will show how to safely implement an appropriate indexing strategy when migrating to Exadata - one that ensures indexes get used when appropriate without compromising Exadata-specific features such as Smart Scans and Storage Indexes.
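As a hedged illustration of one safe way to test whether an index is still needed on Exadata (the index name is hypothetical), an index can be hidden from the optimizer before it is actually dropped, while cell statistics show whether the workload benefits from offload instead:

```sql
-- Hide the index from the optimizer instead of dropping it outright.
ALTER INDEX sales_cust_ix INVISIBLE;

-- Did the session's scans get offloaded / helped by storage indexes?
SELECT name, value
FROM   v$mystat NATURAL JOIN v$statname
WHERE  name IN ('cell physical IO bytes eligible for predicate offload',
                'cell physical IO bytes saved by storage index');

-- Restore the index if performance regresses.
ALTER INDEX sales_cust_ix VISIBLE;
```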
What's under the hood of Exadata X2-2 and X2-8?
Oracle and Intel have had a long relationship working together to optimize Oracle software for Intel platforms. With Exadata's use of Oracle's Sun server platforms, Oracle and Intel are working even more closely to optimize the hardware for the best performance and reliability. Much work goes into selecting the right Exadata configuration, and when choosing between the Exadata X2-2 and Exadata X2-8, the "safe" X2-2 route is often selected, leaving behind the potential benefits of the X2-8 architecture.
During this presentation, learn about:
• What hardware is inside the Exadata
• The differences and benefits of the Exadata X2-2 and Exadata X2-8 hardware architectures
• Intel’s platform architecture
• Intel’s server processor roadmap
• Overview of Intel’s CPU architecture
Drilling Deep into Exadata Performance with ASH, SQL Monitoring and ExaSnapper
In this session we will go through multiple case studies of systematically diagnosing the Exadata-specific problems which may reduce your SQL performance and efficiency on the Exadata platform.
We will start with the standard performance tools, such as the SQL Monitoring report, and then query ASH data manually for more flexibility. We will then proceed beyond what the classic tools can offer and use Tanel's new Exadata Snapper tool to interpret the various low-level Exadata metrics that the cell servers send back to the database sessions. You will also see some other scripts and tools that Tanel regularly uses for Exadata performance work.
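As a hedged sketch of the starting point described above (the sql_id is a placeholder), a text SQL Monitoring report and a quick ASH wait-event breakdown for one statement can be pulled like this:

```sql
-- SQL Monitoring report for one statement ('0gduhq9a7c5z2' is a placeholder).
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id => '0gduhq9a7c5z2', type => 'TEXT')
FROM   dual;

-- Where did the statement spend its time? Group ASH samples by wait event.
SELECT NVL(event, 'ON CPU') AS event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sql_id = '0gduhq9a7c5z2'
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC;
```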
Exadata Performance Optimization
The Exadata platform is fast, as everyone says, but there’s more to Exadata performance than just the hardware. The “more” part is SQL. The developer / DBA must understand how Exadata interprets and executes SQL in order to get the most out of the platform.
In this session, we will discuss what makes Exadata different / better / faster and how to exploit its specific features. We’ll review the main features (Smart Scan, HCC compression, parallelism), but then dig deeper into the process of tuning both the Oracle environment and specific queries.
You will see why it is possible for a query that runs for tens of hours on a traditional system to run in minutes (or less) on Exadata.
Bottlenecks, Bottlenecks, and more Bottlenecks: Lessons Learned from 2 Years of Exadata Benchmarks
Every system has bottlenecks; some in software and some in hardware. When you remove the common hardware bottlenecks, those in software often surface. This session will explore these bottlenecks as observed in two years of customer benchmarks on Exadata. What should you expect when you move your applications to Exadata? Is Exadata really 10x faster? How do you know? What tools can you use to measure performance and locate bottlenecks?