
Feed aggregator

Why ask how, when why is so much more fun?

Tim Hall - Tue, 2016-01-26 12:43

OK. So the original quote from Spawn is exactly the opposite, but let’s go with it… :)

A few times in the past I’ve been asked questions and started to give a direct answer, then someone smarter has jumped in and asked the killer question. Why? Quite often it’s easy to answer the initial question, so rather than understand the reason for the question, you just respond and pat yourself on the back. That’s great, but without knowing the context of the question, the “right answer” could actually be the “wrong answer”. As Tom always says, “The answer to every question is *it depends*!”

I had another situation like that recently. The question was, “How can I install VNC on a Linux box?” Pretty simple answer and I know a guy who wrote an article on that, so I pointed them to the article. Job done!

Then I got a pang of guilt and the conversation went like this…

  • Q: Why do you want to install VNC?
  • A: Because my boss told me to.
  • Q: But why does your boss want you to install VNC?
  • A: Because the network connection breaks sometimes, making an “ssh -X user@host” a dodgy solution.

Now I have nothing against VNC itself, but installing it on a server is one more attack vector to worry about, especially if it’s not necessary. Knowing the context allowed me to talk about silent installs, command line DBCA, running things in the background, even the screen command.

If the person goes away and installs VNC, that’s no skin off my nose, but just answering how, without knowing the context, could well have opened them, or me, up to criticism down the line.

So next time you answer a question and are about to enable smug mode, ask yourself if you have actually helped, or just taken the easy route.

Cheers

Tim…


FREE Demo Class : Oracle Apps DBA 12.1 & 12.2 : on 29 January (Friday) Join Team K21 Technologies

Online Apps DBA - Tue, 2016-01-26 09:26
This entry is part 7 of 8 in the series Oracle Apps 12.2


We’re glad and excited to share that 2016 has got off to an amazing start for learning, and we’ve noticed that you are seeking the same. How?

A few days back we ran our FREE Webinar on Oracle E-Business 12.2 New Features every Apps DBA must know, and many of the attendees requested a free demo hands-on class so they could get an idea of how we present in a live class. So we thought, why not present live some of the activities an Apps DBA would do in 12.2, such as installing R12 (12.1 or 12.2), starting and stopping services, managing the application, or seeing how the new WebLogic console looks in 12.2, so every one of you can get a feel for a live class with K21 Technologies.

Our upcoming Free Demo Class for Apps DBAs is on 29 January at 4:30 PM GMT / 12:30 PM EST / 10:00 PM IST.

Before you register, I just need one favour from you: put a comment in the comment box below, or on the registration page, telling us what you would like us to cover in the class, so I can shape the class around your requirements.

If you wish to register for the Free Demo Hands-on Class for Apps DBAs, just click the button below.

Click Here to Register For FREE Demo Class

Please note that we have a limited number of seats for a limited time, so grab yours before they are gone!

Don’t forget to share this post if you think it could be useful to others, and subscribe to this blog for more free webinars and useful Oracle-related content.


Categories: APPS Blogs

Is Oracle Application Express Secure?

Joel Kallman - Tue, 2016-01-26 08:47
Is Oracle Application Express secure?  That's the question I received today, from the customer of a partner.  The customer asked:
"Do you know if Oracle or a third-party has verified how secure APEX is against threats or vulnerabilities? It would be nice to have something published saying how secure APEX is and how it’s never been compromised."Now I imagine smart people like David Litchfield or Pete Finnigan or Alexander Kornbrust would hope that I say something daft here.  But that's not going to happen.  As I replied to the partner:

Sorry, but this doesn't make sense, and for a couple reasons:

  1. There have been published security vulnerabilities in Application Express in the Oracle Critical Patch Update, and they have been fixed in subsequent releases of APEX.  It is incorrect to say that there have never been bugs in APEX itself.  Here's an example:  http://www.oracle.com/technetwork/topics/security/cpujul2015-2367936.html
  2. Secondly, even if APEX never had any security bugs in its existence, if someone built an APEX application which is susceptible to SQL Injection or cross site scripting, does that mean that APEX was compromised?
The request of this customer isn't practical for any piece of software.  If something has never been compromised, does that mean it's secure?  If I find no bugs in an application written by your company, does that mean it's bug-free?

I can offer you the following:

  1. APEX 5.0.3 is the most secure version of APEX in our history.
  2. APEX 5.0.3 has more security features than any release of APEX in our history.
  3. We are never permitted to release any version of APEX with known security vulnerabilities, whether they are internally or externally filed.
  4. We routinely scan APEX itself for security vulnerabilities across a variety of threats, and do this multiple times in a release cycle.
  5. Oracle Database Cloud Schema Service runs APEX, and has endured yet another set of multiple rounds of Cloud Security testing.
  6. The Oracle Store runs APEX.
  7. APEX is used in countless military agencies and classified agencies around the globe.
  8. Even inside of Oracle, IT hosts an instance of APEX used by practically every line of business in the company, and it's cleared for the most strict information classification inside of Oracle.
  9. APEX is even used in the security products from Oracle, including Oracle Audit Vault & Database Firewall, Oracle Key Vault and Oracle Real Application Security.
There is security of APEX, and then there is security of the application you've written.  You can assess the security of an application via tools.  One of the best tools on the market is ApexSec from Recx Ltd.; we use it internally for APEX applications, the security assessment teams at Oracle use it for other APEX applications, and it is used by numerous military and other classified agencies.

Free SQL Webinar by Oracle ACE Kim Berg Hansen

Gerger Consulting - Tue, 2016-01-26 06:55
Join us on February 23 at 14:00 CET (07:00 EST) when our guest presenter, Oracle ACE Kim Berg Hansen, will present “Use Cases of Row Pattern Matching in Oracle 12c”.
In this month’s free webinar, you’ll learn how you can use Oracle’s pattern recognition features to gain actionable insights from your organization’s or client’s data.
Click to sign up for the webinar


Let’s hear from Kim why you should attend this webinar:


In recent years, pattern recognition has been a very hot topic in business intelligence. Being able to use SQL for pattern recognition is one of the must-have skills if you are working in the BI field. At ProHuddle, we’ll continue to study pattern recognition with more webinars in the upcoming months.
Categories: Development

Trace file size

Jonathan Lewis - Tue, 2016-01-26 02:30

Here’s a convenient enhancement for tracing that came up on Twitter a few days ago – first in a tweet that I retweeted, then in a question from Christian Antognini based on this bit of the 12c Oracle documentation (opens in separate tab). The question was – does it work for you?

The new description for max_dump_file_size says that for large enough values Oracle will split the file into multiple chunks of a few megabytes, using a suffix to identify the sequence of the chunks, keeping only the first chunk and the most recent chunks. Unfortunately this doesn’t seem to be true. However, prompted by Chris’ question I ran a quick query against the full parameter list looking for parameters with the word “trace” in their name:


select
        /*+
                leading(nam val val2)
                full(nam)
                full(val)  use_hash(val)  no_swap_join_inputs(val)
                full(val2) use_hash(val2) no_swap_join_inputs(val2)
        */
        nam.ksppinm                             name,
        val.ksppstvl                            ses_val,
        val2.ksppstvl                           sys_val,
        nam.ksppdesc                            description,
        nam.indx+1                              numb,
        nam.ksppity                             type,
        val.ksppstdf                            is_def,
        decode(bitand(nam.ksppiflg/256,1),
                1,'True',
                  'False'
        )                                       ses_mod,
        decode(bitand(nam.ksppiflg/65536,3),
                1,'Immediate',
                2,'Deferred' ,
                3,'Immediate',
                  'False'
        )                                       sys_mod,
        decode(bitand(val.ksppstvf,7),
                1,'Modified',
                4,'System Modified',
                  'False'
        )                                       is_mod,
        decode(bitand(val.ksppstvf,2),
                2,'True',
                  'False'
        )                                       is_adj,
        val.ksppstcmnt                          notes
from
        x$ksppi         nam,
        x$ksppcv        val,
        x$ksppsv        val2
where
        nam.indx = val.indx
and     val2.indx = val.indx
and     ksppinm like '%&m_search.%'
order by
        nam.ksppinm
;

Glancing through the result I spotted a couple of interesting parameters with the letters “uts” in their names, so re-ran my query looking for all the “uts” parameters, getting the following (edited) list:


NAME                           SYS_VAL         DESCRIPTION    
------------------------------ --------------- ---------------------------------------------
_diag_uts_control              0               UTS control parameter
_uts_first_segment_retain      TRUE            Should we retain the first trace segment
_uts_first_segment_size        0               Maximum size (in bytes) of first segments 
_uts_trace_disk_threshold      0               Trace disk threshold parameter
_uts_trace_segment_size        0               Maximum size (in bytes) of a trace segment
_uts_trace_segments            5               Maximum number of trace segments 

Note particularly the “first segment size” and “trace segment size” – defaulting to zero (which often means a hidden internal setting, though that doesn’t seem to be the case here, but maybe that’s what the “diag control” is for). I haven’t investigated all the effects, but after a little experimentation I found that all I needed to do to get the behaviour attributed to max_dump_file_size was to set the following two parameters – which I could do at the session level.


alter session set "_uts_first_segment_size" = 5242880;
alter session set "_uts_trace_segment_size" = 5242880;

The minimum value for these parameters is the one I’ve shown above (5120 KB) and with the default value for _uts_trace_segments you will get a maximum of 5 trace files with sequential names like the following:

ls -ltr *4901*.trc

-rw-r----- 1 oracle oinstall 5243099 Jan 26 08:15 orcl_ora_4901_1.trc
-rw-r----- 1 oracle oinstall 5243064 Jan 26 08:15 orcl_ora_4901_12.trc
-rw-r----- 1 oracle oinstall 5243058 Jan 26 08:15 orcl_ora_4901_13.trc
-rw-r----- 1 oracle oinstall 5242993 Jan 26 08:15 orcl_ora_4901_14.trc
-rw-r----- 1 oracle oinstall 1363680 Jan 26 08:15 orcl_ora_4901.trc

As you can see I’m currently generating my 15th trace, and Oracle has kept the first one and the previous three. It’s always working on a file with no suffix to its name but as soon as that file hits its limiting size (plus or minus a few bytes) it gets its appropriate suffix, the oldest file is deleted, and a new trace file without a suffix is started.

Apart from the usual header information the trace files start and end with lines like:

*** TRACE CONTINUED FROM FILE /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4901_11.trc ***
  
*** TRACE SEGMENT RENAMED TO /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4901_12.trc ***

There is one little trap to watch out for: if you set either of these parameters to be larger than max_dump_file_size, tracing stops as soon as one of the segments hits the max_dump_file_size, and that trace file ends with the usual “overflow” message – e.g. when I changed the max_dump_file_size to 4M in mid-session:

*** DUMP FILE SIZE IS LIMITED TO 4194304 BYTES ***

In my case I had started with max_dump_file_size set to 20M, so I got lucky with my choice of 5M as the segment size.

Further investigation is left as an exercise to the interested reader.

 


Car Logos

iAdvise - Mon, 2016-01-25 21:02
Symbols and elaborate images for car logos can be confusing. So many famous brands use the same animals or intricate images that may seem appealing at first but are actually so similar to each other that you can't tell one company apart from the other unless you're really an expert in the field.
How many auto brands do you know that have used a jungle cat or a horse or a hawk's wings in their trademark?
There're just too many to count.
So how can you create a design for your automobile company that is easy to remember and also sets you apart from the crowd?
Why not use your corporation name in the business mark?
How many of us confuse the Honda trademark with Hyundai's or Mini's with Bentley's?
But that won't happen if your car logos and names are the same.
Remember Ford's and BMW's business images, or MG's and Nissan's? The only characteristic that makes them easier to remember is their company name in their brand mark.
But it's not really that easy to design a trademark with the corporation name. Since the only things that can make your car brand mark appealing are the fonts and colors, you need to make sure that you use the right ones to make your logo distinct and easy to remember.
What colors to use?
When using the corporation name in trademark, the rule is very simple. Use one solid color for the text and one solid color for the background. Text in silver color with a red or a dark blue background looks appealing but you can experiment with different colors as well. You can also use white colored text on a dark green background which will make your design identifiable from afar. Don't be afraid to use bright colored background but make sure you use the text color that complements the background instead of contrasting with it.
What kind of fonts to use?
Straight and big fonts may be easier to read from a distance, but the font styles that look intricate and appealing to customers and give your design a classic look are the curvier ones. Just make sure that the text is not so curvy that it loses its readability. You can even use the Times Roman font with an italic effect, or some other professional font style with a curvy effect, to make sure that the text is readable and rounded at the same time.
Remember the Ford logo? It may just be white text on a red background, but it's the curvy font style that sets it apart from the rest. The same goes for the Smart car logo.
What shapes to use?
The vehicle business image has to be enclosed in a shape, of course. The shape that is most commonly used is a circle. You can use an oval, a loose square or even the superman diamond shape to enclose your design. But make sure that your chosen shape does not have too many sides that make the mark complicated.
The whole idea of a car corporation mark is to make it easily memorable and recognizable along with making it a classic. Using the above mentioned ideas can certainly do that for your trademark.
Beverly Houston works as a Senior Design Consultant at a Professional Logo Design Company. For more information on car logos and names find her competitive rates at Logo Design Consultant.
Categories: APPS Blogs

Oracle Midlands : Event #13

Tim Hall - Mon, 2016-01-25 16:41

Tomorrow is Oracle Midlands Event #13.


Franck is a super-smart guy, so please show your support and start the year as you mean to go on!

Cheers

Tim…


Hybrid Databases: Key to Your Success

Pythian Group - Mon, 2016-01-25 14:51

Almost every company’s success is contingent upon its ability to effectively turn data into knowledge. Data, combined with the power of analytics, brings new, previously unavailable business insights that help companies truly transform the way they do business—allowing them to make more informed decisions, improve customer engagement, and predict trends.

Companies can only achieve these insights if they have a holistic view of their data. Unfortunately, most companies have hybrid data—different types of data, living in different systems that don’t talk to each other. What companies need is a way to consolidate the data they currently have while allowing for the integration of new data.

The Hybrid Data Challenge

There are two primary reasons why data starts life in separate systems. The first is team segregation. Each team in an organization builds its systems independently of one another based on its unique objectives and data requirements. This, in part, is also the second reason. Teams often choose a database system that specializes in handling the specific application tasks. Sometimes they need to prioritize for scalability and performance. Other times, flexibility and reliability are more important.

Businesses need to evolve their data strategy to establish a “data singularity,” which consolidates the total of their information assets as a competitive, strategic advantage and enables real-time knowledge. A key objective of this strategy should be to reduce data movement, especially during processing and while executing queries, because both are very expensive operations. As well, data movement, or data replication in general, is a very fragile operation, often requiring significant support resources to run smoothly.

Enter The World Of Hybrid Databases

A hybrid database is a flexible, high-performing data store that can store and access multiple, different data types, including unstructured (pictures, videos, free text) and semi-structured data (XML, JSON). It can locate individual records quickly, handle both analytical and transactional workloads simultaneously, and perform analytical queries at scale. Analytical queries can be quite resource-intensive so a hybrid database needs to be able to scale out and scale linearly. It must also be highly available and include remote replication capabilities to ensure the data is always accessible.

In the past, many organizations consolidated their data in a data warehouse. Data warehouses gave us the ability to access all of our data, and while these systems were highly optimized for analytical long-running queries, they were strictly batch-oriented with weekly or nightly loads. Today, we demand results in real time and query times in milliseconds.

When Cassandra and Hadoop first entered the market, in 2008 and 2011, respectively, they addressed the scalability limitations of traditional relational database systems and data warehouses, but they had restricted functionality. Hadoop offered infinite linear scalability for storing data, but no support for SQL or any kind of defined data structure. Cassandra, a popular NoSQL option, supported semi-structured, distributed document formats, but couldn’t do analytics. They both required significant integration efforts to get the results organizations were looking for.

In 2008, Oracle introduced the first “mixed use” database appliance—Exadata. Oracle Exadata brought a high-performance reference hardware configuration, an engineered system, as well as unique features in query performance and data compression, on top of an already excellent transactional processing system, with full SQL support, document-style datatypes, spatial and graph extensions, and many other features.

In recent years, vendors have started more aggressively pursuing the hybrid market and existing products have started emerging that cross boundaries. We can now run Hadoop-style MapReduce jobs on Cassandra data, and SQL on Hadoop via Impala. Microsoft SQL Server introduced Columnar Format storage for analytical data sets, and In-Memory OLTP—a high performance transactional system, with log-based disk storage. Oracle also introduced improvements to their product line, with Oracle In-Memory, a data warehouse with a specific high-performance memory store, bringing extreme analytical performance.

Choosing A Hybrid Database Solution

Rearchitecting your data infrastructure and choosing the right platform can be complicated—and expensive. To make informed decisions, start with your end goals in mind, know what types of data you have, and ensure you have a scalable solution to meet your growing and changing organizational and business needs—all while maintaining security, accessibility, flexibility, and interoperability with your existing infrastructure. A key design principle is to reduce your data movement and keep your architecture simple. For example, a solution that relies on data being in Hadoop but performs analytics in another database engine is not optimal because large amounts of data must be copied between the two systems during query execution—a very inefficient and compute-intensive process.

Have questions? Contact Pythian. We can help analyze your business data requirements, recommend solutions, and create a roadmap for your data strategy, leveraging either hybrid or special purpose databases.

Categories: DBA Blogs

February 17, 2016: Baxters Food Group―Oracle EPM Cloud Customer Reference Forum

Linda Fishman Hoyle - Mon, 2016-01-25 11:58

Join us for another Oracle Customer Reference Forum on February 17, 2016 at 5:00 p.m. GMT / 9:00 a.m. PT. Elaine McKechnie, Group Head of MIS at Baxters Food Group, will talk about why the company chose to implement Oracle EPM Cloud, Oracle ERP, and Oracle Business Intelligence. Ms. McKechnie will share Baxters' selection process for its new Oracle EPM Cloud solution, its phased implementation approach, and the expected benefits.

Baxters is a Scottish food processing company with 1000+ employees.

Register now to attend the live forum on Wednesday, February 17, 2016 at 5:00 p.m. GMT / 9:00 a.m. PT and learn more about Baxters Food Group's experience with Oracle EPM Cloud.

Advisor Webcast:WebCenter Content 12c, Documents Cloud Service and Hybrid ECM

WebCenter Team - Mon, 2016-01-25 08:49

ADVISOR WEBCAST: WebCenter Content 12c, Documents Cloud Service and Hybrid ECM

Schedule:

  • Wednesday, February 17, 2016 08:00 AM (US Pacific Time)
  • Wednesday, February 17, 2016 11:00 AM (US Eastern Time)
  • Wednesday, February 17, 2016 05:00 PM (Central European Time)
  • Wednesday, February 17, 2016 09:30 PM (India Standard Time)

This one hour session is recommended for technical and functional users who use WebCenter Content. This session will focus on new release information including WebCenter Content 12c, Documents Cloud Service and Hybrid ECM. We will also briefly discuss the future roadmap for WebCenter Content.

Topics Include:
Overview of new features of WebCenter Content 12c

  • WebCenter Imaging embedded
  • Digital Asset Management improvements
  • Content UI improvements
  • Desktop Integration Suite improvements

Overview of Documents Cloud Service

  • File Access/Sync
  • Application Integration
  • Document Collaboration
  • Security

Overview of Hybrid ECM

  • Integration with on-premise WebCenter Content
  • External Collaboration

Duration: 1 hr

Current Schedule and Archived Downloads can be found in Note 740966.1

WebEx Conference Details

Topic: WebCenter Content 12c, Documents Cloud Service and Hybrid ECM
Event Number: 599 080 471
Event Passcode: 909090

Register for this Advisor Webcast: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=599080471&t=a

Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.

InterCall Audio Instructions

A list of Toll-Free Numbers can be found below.

  • Participant US/Canada Dial-in #: 1866 230 1938   
  • International Toll-Free Numbers
  • Alternate International Dial-In #: +44 (0) 1452 562 665
  • Conference ID: 25721968

VOICESTREAMING AVAILABLE

Kafka and more

DBMS2 - Mon, 2016-01-25 05:28

In a companion introduction to Kafka post, I observed that Kafka at its core is remarkably simple. Confluent offers a marchitecture diagram that illustrates what else is on offer, about which I’ll note:

  • The red boxes — “Ops Dashboard” and “Data Flow Audit” — are the initial closed-source part. No surprise that they sound like management tools; that’s the traditional place for closed source add-ons to start.
  • “Schema Management”
    • Is used to define fields and so on.
    • Is not equivalent to what is ordinarily meant by schema validation, in that …
    • … it allows schemas to change, but puts constraints on which changes are allowed.
    • Is done in plug-ins that live with the producer or consumer of data.
    • Is based on the Hadoop-oriented file format Avro.
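
To make the schema idea a little more concrete, here is a minimal sketch using the Avro Java API. The record and field names are purely illustrative assumptions of mine; this is not Confluent's actual schema-management interface, just the kind of Avro definition such a layer would track and evolve.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class AvroSchemaSketch {
    public static void main(String[] args) {
        // A hypothetical Avro schema for a "PageView" event; names are illustrative only.
        String schemaJson =
                "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":["
              + "{\"name\":\"user_id\",\"type\":\"string\"},"
              + "{\"name\":\"url\",\"type\":\"string\"},"
              + "{\"name\":\"ts\",\"type\":\"long\"}]}";

        // Parse the definition -- this is the sort of artefact a schema-management plug-in
        // would version and constrain (e.g. only allowing compatible changes over time).
        Schema schema = new Schema.Parser().parse(schemaJson);

        // Build a record that conforms to the schema before it is serialized and published.
        GenericRecord event = new GenericData.Record(schema);
        event.put("user_id", "u-123");
        event.put("url", "/products/42");
        event.put("ts", System.currentTimeMillis());

        System.out.println(event);
    }
}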

Kafka offers little in the way of analytic data transformation and the like. Hence, it’s commonly used with companion products. 

  • Per Confluent/Kafka honcho Jay Kreps, the companion is generally Spark Streaming, Storm or Samza, in declining order of popularity, with Samza running a distant third.
  • Jay estimates that there’s such a companion product at around 50% of Kafka installations.
  • Conversely, Jay estimates that around 80% of Spark Streaming, Storm or Samza users also use Kafka. On the one hand, that sounds high to me; on the other, I can’t quickly name a counterexample, unless Storm originator Twitter is one such.
  • Jay’s views on the Storm/Spark comparison include:
    • Storm is more mature than Spark Streaming, which makes sense given their histories.
    • Storm’s distributed processing capabilities are more questionable than Spark Streaming’s.
    • Spark Streaming is generally used by folks in the heavily overlapping categories of:
      • Spark users.
      • Analytics types.
      • People who need to share stuff between the batch and stream processing worlds.
    • Storm is generally used by people coding up more operational apps.

If we recognize that Jay’s interests are obviously streaming-centric, this distinction maps pretty well to the three use cases Cloudera recently called out.
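For a concrete picture of the most common pairing described above, here is a minimal sketch of Spark Streaming consuming a Kafka topic via the direct-stream API that shipped with Spark 1.3. The broker address, topic name and the trivial per-batch count are my own illustrative choices, not anything taken from the post.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class KafkaSparkStreamingSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("kafka-streaming-sketch").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Kafka connection details -- illustrative values.
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "localhost:9092");
        Set<String> topics = Collections.singleton("events");

        // Read each Kafka message as a (key, value) pair of strings.
        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // A trivial "analytic" step: count messages in each 10-second batch.
        messages.count().print();

        jssc.start();
        jssc.awaitTermination();
    }
}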

Complicating this discussion further is Confluent 2.1, which is expected late this quarter. Confluent 2.1 will include, among other things, a stream processing layer that works differently from any of the alternatives I cited, in that:

  • It’s a library running in client applications that can interrogate the core Kafka server, rather than …
  • … a separate thing running on a separate cluster.

The library will do joins, aggregations and so on, while relying on core Kafka for information about process health and the like. Jay sees this as more of a competitor to Storm in operational use cases than to Spark Streaming in analytic ones.

We didn’t discuss other Confluent 2.1 features much, and frankly they all sounded to me like items from the “You mean you didn’t have that already??” list any young product has.

Related links

Categories: Other

Kafka and Confluent

DBMS2 - Mon, 2016-01-25 05:27

For starters:

  • Kafka has gotten considerable attention and adoption in streaming.
  • Kafka is open source, out of LinkedIn.
  • Folks who built it there, led by Jay Kreps, now have a company called Confluent.
  • Confluent seems to be pursuing a fairly standard open source business model around Kafka.
  • Confluent seems to be in the low to mid teens in paying customers.
  • Confluent believes 1000s of Kafka clusters are in production.
  • Confluent reports 40 employees and $31 million raised.

At its core Kafka is very simple:

  • Kafka accepts streams of data in substantially any format, and then streams the data back out, potentially in a highly parallel way.
  • Any producer or consumer of data can connect to Kafka, via what can reasonably be called a publish/subscribe model.
  • Kafka handles various issues of scaling, load balancing, fault tolerance and so on.
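
As a rough illustration of that publish/subscribe model, here is a minimal sketch using the Java producer and consumer clients that ship with Kafka 0.9. The broker address, topic name and messages are illustrative assumptions of mine, not anything Confluent-specific.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaPubSubSketch {
    public static void main(String[] args) {
        // Producer: publish a record to a topic -- any process that can reach the broker can do this.
        Properties prodProps = new Properties();
        prodProps.put("bootstrap.servers", "localhost:9092");
        prodProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prodProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            producer.send(new ProducerRecord<>("events", "key-1", "hello, kafka"));
        }

        // Consumer: subscribe to the same topic and stream the data back out.
        Properties consProps = new Properties();
        consProps.put("bootstrap.servers", "localhost:9092");
        consProps.put("group.id", "sketch-group");
        consProps.put("auto.offset.reset", "earliest");
        consProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consProps)) {
            consumer.subscribe(Collections.singletonList("events"));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
            }
        }
    }
}

In practice a consumer would poll in a loop; a single poll is shown here only to keep the sketch short.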

So it seems fair to say:

  • Kafka offers the benefits of hub vs. point-to-point connectivity.
  • Kafka acts like a kind of switch, in the telecom sense. (However, this is probably not a very useful metaphor in practice.)

Jay also views Kafka as something like a file system. Kafka doesn’t actually have a file-system-like interface for managing streams, but he acknowledges that as a need and presumably a roadmap item.

The most noteworthy technical point for me was that Kafka persists data, for reasons of buffering, fault-tolerance and the like. The duration of the persistence is configurable, and can be different for different feeds, with two main options:

  • Guaranteed to have the last update of anything.
  • Complete for the past N days.

Jay thinks this is a major difference vs. messaging systems that have come before. As you might expect, given that data arrives in timestamp order and then hangs around for a while:

  • Kafka can offer strong guarantees of delivering data in the correct order.
  • Persisted data is automagically broken up into partitions.

Technical tidbits include:

  • Data is generally fresh to within 1.5 milliseconds.
  • 100s of MB/sec/server is claimed. I didn’t ask how big a server was.
  • LinkedIn runs >1 trillion messages/day through Kafka.
  • Others in that throughput range include but are not limited to Microsoft and Netflix.
  • A message is commonly 1 KB or less.
  • At a guesstimate, 50%ish of messages are in Avro. JSON is another frequent format.

Jay’s answer to any concern about performance overhead for current or future features is usually to point out that anything other than the most basic functionality:

  • Runs in different processes from core Kafka …
  • … if it doesn’t actually run on a different cluster.

For example, connectors have their own pools of processes.

I asked the natural open source question about who contributes what to the Apache Kafka project. Jay’s quick answers were:

  • Perhaps 80% of Kafka code comes from Confluent.
  • LinkedIn has contributed most of the rest.
  • However, as is typical in open source, the general community has contributed some connectors.
  • The general community also contributes “esoteric” bug fixes, which Jay regards as evidence that Kafka is in demanding production use.

Jay has a rather erudite and wry approach to naming and so on.

  • Kafka got its name because it was replacing something he regarded as Kafkaesque. OK.
  • Samza is an associated project that has something to do with transformations. Good name. (The central character of The Metamorphosis was Gregor Samsa, and the opening sentence of the story mentions a transformation.)
  • In his short book about logs, Jay has a picture caption “ETL in Ancient Greece. Not much has changed.” The picture appears to be of Sisyphus. I love it.
  • I still don’t know why he named a key-value store Voldemort. Perhaps it was something not to be spoken of.

What he and his team do not yet have is a clear name for their product category. Difficulties in naming include:

Confluent seems to be using “stream data platform” as a placeholder. As per the link above, I once suggested Data Stream Management System, or more concisely Datastream Manager. “Event”, “event stream” or “event series” could perhaps be mixed in as well. I don’t really have an opinion yet, and probably won’t until I’ve studied the space in a little more detail.

And on that note, I’ll end this post for reasons of length, and discuss Kafka-related technology separately.

Related links

Categories: Other

ServletContextAware Controller class with Spring

Pas Apicella - Mon, 2016-01-25 03:55
I rarely need to save state within the Servlet Context via an application scope, but recently I did, and here is what your controller class would look like to get access to the ServletContext with Spring. I was using Spring Boot 1.3.2.RELEASE.

In short you implement the "org.springframework.web.context.ServletContextAware" interface as shown below. In this example we retrieve an application scope attribute.
  
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.context.ServletContextAware;

import javax.servlet.ServletContext;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Controller
public class CommentatorController implements ServletContextAware
{
    private static final Logger log = LoggerFactory.getLogger(CommentatorController.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    private ServletContext context;

    public void setServletContext(ServletContext servletContext)
    {
        this.context = servletContext;
    }

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String listTeams(Model model)
    {
        String jsonString = (String) context.getAttribute("riderData");
        List<Rider> riders = new ArrayList<>();

        if (jsonString != null)
        {
            if (jsonString.trim().length() != 0)
            {
                Map<String, Object> jsonMap = parser.parseMap(jsonString);
                List<Object> riderList = (List<Object>) jsonMap.get("Riders");

                for (Object rider : riderList)
                {
                    Map m = (Map) rider;
                    riders.add(
                        new Rider((String) m.get("RiderId"),
                                  (String) m.get("Cadence"),
                                  (String) m.get("Speed"),
                                  (String) m.get("HeartRate")));
                }

                //log.info("Riders = " + riders.size());
                model.addAttribute("ridercount", riders.size());
            }
        }
        else
        {
            model.addAttribute("ridercount", 0);
        }

        model.addAttribute("riders", riders);

        return "commentator";
    }
}
Categories: Fusion Middleware

SafeDrop – Part 2: Function and Benefits

Oracle AppsLab - Mon, 2016-01-25 02:14

SafeDrop is a secure box for receiving a physical package delivery, without the need of recipient to be present. If you recall, it was my team’s project at the AT&T Developer Summit Hackathon earlier this month.

SafeDrop is implemented with Intel Edison board at its core, coordinating various peripheral devices to produce a secure receiving product, and it won second place in the “Best Use of Intel Edison” at the hackathon.

SafeDrop box with scanner

Components built in the SafeDrop

While many companies have focused on the online security of eCommerce, the current package delivery at the last mile is still very much insecure. ECommerce is ubiquitous, and somehow people need to receive the physical goods.

The delivery company would tell you the order will be delivered on a particular day. You can wait at home all day to receive the package, or let the package sit in front of your house and risk someone stealing it.

Every year there are reports of package theft during holiday season, but more importantly, the inconvenience of staying home to accept goods and the lack of peace of mind, really annoys many people.

With SafeDrop, your package is delivered, securely!

How SafeDrop works:

1. When a recipient is notified of a package delivery with a tracking number, he puts that tracking number in a mobile app, which registers it with the SafeDrop box so the box knows it is expecting a package with that tracking number.

2. When delivery person arrives, he just scans the tracking number barcode on the SafeDrop box, and that barcode is the unique key to open the SafeDrop. Once the tracking number is verified, the SafeDrop will open.

Delivery person scans the package in front of SafeDrop

3. When the SafeDrop is opened, a video is recorded for the entire time the door is open, as a security measure. If the SafeDrop is not closed, a loud buzzing sound continues until it is closed properly.

Inside of SafeDrop: Intel Edison board, sensor, buzzer, LED, servo, and webcam

4. Once the package is within SafeDrop, a notification is sent to the recipient's mobile app, indicating the expected package has been delivered. The notification also includes a link to the recorded video.

In the future, SafeDrop could be integrated with USPS, UPS and FedEx, to verify the package tracking number for the SafeDrop automatically. When the delivery person scans the tracking number at the SafeDrop, it would automatically update the status to “delivered” for that tracking record in the delivery company's database too. That way, the entire delivery process is automated in a secure fashion.
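
Purely as an illustration of the verification flow in steps 1 and 2 above, here is a minimal Java sketch. This is not the actual Edison code from the hackathon; every class and method name is hypothetical, and the hardware calls are simple placeholders.

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the SafeDrop controller logic: register expected tracking
// numbers, verify a scanned barcode, and drive placeholder hardware actions.
public class SafeDropController {
    private final Set<String> expectedTrackingNumbers = new HashSet<>();

    // Step 1: the recipient's mobile app registers a tracking number with the box.
    public void registerPackage(String trackingNumber) {
        expectedTrackingNumbers.add(trackingNumber);
    }

    // Step 2: the courier scans the package barcode; the barcode itself is the key.
    public boolean onBarcodeScanned(String scannedTrackingNumber) {
        if (!expectedTrackingNumbers.remove(scannedTrackingNumber)) {
            return false;                       // unknown package -- the box stays locked
        }
        startVideoRecording();                  // step 3: record while the door is open
        unlockDoor();
        notifyRecipient(scannedTrackingNumber); // step 4: push notification with video link
        return true;
    }

    // Placeholders for the servo, webcam and notification integrations on the board.
    private void unlockDoor() { System.out.println("servo: door unlocked"); }
    private void startVideoRecording() { System.out.println("webcam: recording started"); }
    private void notifyRecipient(String trackingNumber) {
        System.out.println("notify: package " + trackingNumber + " delivered");
    }

    public static void main(String[] args) {
        SafeDropController box = new SafeDropController();
        box.registerPackage("1Z999AA10123456784");   // hypothetical tracking number
        System.out.println("accepted: " + box.onBarcodeScanned("1Z999AA10123456784"));
    }
}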

This SafeDrop design highlights three advantages:

1. Tracking number barcode as the key to the SafeDrop.
That barcode is tracked during the entire delivery, and it is with the package always, and it is fitting to use that as “key” to open its destination. We do not introduce any “new” or “additional” thing for the process.

2. Accurate delivery, which eliminates human mistakes.
Human error sometimes causes deliveries to the wrong address. With SafeDrop integrated into the shipping system, the focus is on a package (with tracking number as package ID) going to a target (SafeDrop ID associated with an address).

In a sense, the package (package ID) has its intended target (SafeDrop ID). The package can only be deposited into one and only one SafeDrop, which eliminates the wrong delivery issue.

3. Non-disputable delivery.
This dispute could happen: delivery company says a package has been delivered, but the recipient says that it has not arrived. The possible reasons: a) delivery person didn’t really deliver it; b) delivery person dropped it to a wrong address; c) a thief came by and took it; d) the recipient got it but was making a false claim.

SafeDrop makes things clear! If it is really delivered to SafeDrop, it is being recorded, and delivery company has done its job. If it is in the SafeDrop, the recipient has it. Really there is no dispute.

I will be showing SafeDrop at Modern Supply Chain Experience in San Jose, January 26 and 27 in the Maker Zone in the Solutions Showcase. If you're attending the show, come by and check out SafeDrop.

Links for 2016-01-24 [del.icio.us]

Categories: DBA Blogs

OBIEE12c – Three Months In, What’s the Verdict?

Rittman Mead Consulting - Sun, 2016-01-24 13:22

I’m over in Redwood Shores, California this week for the BIWA Summit 2016 conference where I’ll be speaking about BI development on Oracle’s database and big data platforms. As it’s around three months since OBIEE12c came out and we were here for Openworld, I thought it’d be a good opportunity to reflect on how OBIEE12c has been received by ourselves, the partner community and of course by customers. Given OBIEE11g was with us for around five years it’s clearly still early days in the overall lifecycle of this release, but it’s also interesting to compare back to where we were around the same time with 11g and see if we can spot any similarities and differences to take-up then.

Starting with the positives; Visual Analyzer (note – not the Data Visualization option, I’ll get to that later) has gone down well with customers, at least over in the UK. The major attraction for customers seems to be “Tableau with a shared governed data model and integrated with the rest of our BI platform” (see Oracle’s slide from Oracle Openworld 2015 below), and seeing as the Data Visualization Option is around the same price as Tableau Server the synergies around not having to deploy and support a new platform seem to outweigh the best-of-breed query functionality of Tableau and Qlikview.


Given that VA is an extra-cost option what I’m seeing is customers planning to upgrade their base OBIEE platform from 11g to 12c as part of their regular platform refresh, and considering the VA part after the main upgrade as part of a separate cost/benefit exercise. VA however seems to be the trigger for customer interest in an upgrade, with the business typically now holding the budget for BI rather than IT and Visual Analyzer (like Mobile with 11g) being the new capability that unlocks the upgrade spend.

On the negative side, Oracle charging for VA hasn’t gone down well, either from the customer side who ask what it is they actually get for their 22% upgrade and maintenance fee if they have to pay for anything new that comes with the upgrade; or from partners who now see little in the 12c release to incentivise customers to upgrade that’s not an additional cost option. My response is usually to point to previous releases – 11g with Scorecards and Mobile, the database with In-Memory, RAC and so on – and say that it’s always the case that anything net-new comes at extra cost, whereas the upgrade should be something you do anyway to keep the platform up-to-date and be prepared to uptake new features. My observation over the past month or so is that this objection seems to be going away as people get used to the fact that VA costs extra; the other push-back I get a lot is from IT who don’t want to introduce data mashups into their environment, partly I guess out of fear of the unknown but also partly because of concerns around governance, how well it’ll work in the real world, so on and so on. I’d say overall VA has gone down well at least once we got past the “shock” of it costing extra, I’d expect there’ll be some bundle along the lines of BI Foundation Suite (BI Foundation Suite Plus?) in the next year or so that’ll bundle BIF with the DV option, maybe include some upcoming features in 12c that aren’t exposed yet but might round out the release. We’ll see.

The other area where OBIEE12c has gone down well, surprisingly well, is with the IT department for the new back-end features. I’ve been telling people that whilst everyone thinks 12c is about the new front-end features (VA, new look-and-feel etc) it’s actually the back-end that has the most changes, and will lead to the most financial and cost-saving benefits to customers – again note the slide below from last year’s Openworld summarising these improvements.


Simplifying install, cloning, dev-to-test and so on will make BI provisioning considerably faster and cheaper to do, whilst the introduction of new concepts such as BI Modules, Service Instances, layered RPD customizations and BAR files paves the way for private cloud-style hosting of multiple BI applications on a single OBIEE12c domain, hybrid cloud deployments and mass customisation of hosted BI environments similar to what we’ve seen with Oracle Database over the past few years.

What’s interesting with 12c at this point though is that these back-end features are only half-deployed within the platform; the lack of a proper RPD upload tool, BI Modules and Services Instances only being in the singular and so on point to a future release where all this functionality gets rounded-off and fully realised in the platform, so where we are now is that 12c seems oddly half-finished and over-complicated for what it is, but it’s what’s coming over the rest of the lifecycle that will make this part of the product most interesting – see the slide below from Openworld 2014 where this full vision was set-out, but in Openworld this year was presumably left-out of the launch slides as the initial release only included the foundation and not the full capability.


Compared back to where we were with OBIEE11g (11.1.1.3, at the start of the product cycle) which was largely feature-complete but had significant product quality issues, with 12c we’ve got less of the platform built-out but (with a couple of notable exceptions) generally good product quality, but this half-completed nature of the back-end must confuse some customers and partners who aren’t really aware of the full vision for the platform.

And finally, cloud; BICS had an update some while ago where it gained Visual Analyzer and data mashups earlier than the on-premise release, and as I covered in my recent UKOUG Tech’15 conference presentation it’s now possible to upload an on-premise RPD (but not the accompanying catalog, yet) and run it in BICS, giving you the benefit of immediate availability of VA and data mashups without having to do a full platform upgrade to 12c.


In-practice there are still some significant blockers for customers looking to move their BI platform wholesale into Oracle Cloud; there’s no ability yet to link your on-premise Active Directory or other security setup to BICS meaning that you need to recreate all your users as Oracle Cloud users, and there’s very limited support for multiple subject areas, access to on-premise data sources and other more “enterprise” characteristics of an Oracle BI platform. And Data Visualisation Cloud Service (DVCS) has so far been a non-event; for partners the question is why would we get involved and sell this given the very low cost and the lack of any area we can really add value, while for customers it’s perceived as interesting but too limited to be of any real practical use. Of course, over the long term this is the future – I expect on-premise installs of OBIEE will be the exception rather than the rule in 5 or 10 years time – but for now Cloud is more “one to monitor for the future” rather than something to plan for now, as we’re doing with 12c upgrades and new implementations.

So in summary, I’d say with OBIEE12c we were pleased and surprised to see it out so early, and VA in-particular has driven a lot of interest and awareness from customers that has manifested itself in enquires around upgrades and new features presentations. The back-end for me is the most interesting new part of the release, promising significant time-saving and quality-improving benefits for the IT department, but at present these benefits are more theoretical than realisable until such time as the full BI Modules/multiple Services Instances feature is rolled-out later this year or next. Cloud is still “one for the future” but there’s significant interest from customers in moving either part or all of their BI platform to the cloud, but given the enterprise nature of OBIEE it’s likely BI will follow after a wider adoption of Oracle Cloud for the customer rather than being the trailblazer given the need to integrate with cloud security, data sources and the need to wait for some other product enhancements to match on-premise functionality.


Categories: BI & Warehousing

Semijoin_driver

Jonathan Lewis - Sun, 2016-01-24 05:42

Here’s one of those odd little tricks that (a) may help in a couple of very special cases and (b) may show up at some future date – or maybe it already does – in the optimizer if it is recognised as a solution to a more popular problem. It’s about an apparent restriction on how the optimizer uses the BITMAP MERGE operation, and to demonstrate a very simple case I’ll start with a data set with just one bitmap index:


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        mod(rownum,1000)        n1,
        rpad('x',10,'x')        small_vc,
        rpad('x',100,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create bitmap index t1_b1 on t1(n1);

select index_name, leaf_blocks, num_rows from user_indexes;

/*
INDEX_NAME           LEAF_BLOCKS   NUM_ROWS
-------------------- ----------- ----------
T1_B1                        500       1000
*/

Realistically we don’t expect to use a single bitmap index to access data from a large table, usually we expect to have queries that give the optimizer the option to choose and combine several bitmap indexes (possibly driving through dimension tables first) to reduce the target row set in the table to a cost-effective level.

In this example, though, I’ve created a column data set that many people might view as “inappropriate” as the target for a bitmap index – in one million rows I have one thousand distinct values, it’s not a “low cardinality” column – but, as Richard Foote (among others) has often had to point out, it’s wrong to think that bitmap indexes are only suitable for columns with a very small number of distinct values. Moreover, it’s the only index on the table, so no chance of combining bitmaps.

Another thing to notice about my data set is that the n1 column has been generated by the mod() function; because of this the column cycles through the 1,000 values I’ve allowed for it, and this means that the rows for any given value are scattered widely across the table, but it also means that if I find a row with the value X in it then there could well be a row with the value X+4 (say) in the same block.

I’ve reported the statistics from user_indexes at the end of the sample code. This shows you that the index holds 1,000 “rows” – i.e. each key value requires only one bitmap entry to cover the whole table, with two rows per leaf block.  (By comparison, a B-tree index on the column was 2,077 leaf blocks uncompressed, or 1,538 leaf blocks when compressed).

So here’s the query I want to play with, followed by the run-time execution plan with stats (in this case from a 12.1.0.2 instance):


alter session set statistics_level = all;

select
        /*+
                qb_name(main)
        */
        max(small_vc)
from
        t1
where
        n1 in (1,5)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      1 |        |      1 |00:00:00.03 |    2006 |      4 |
|   1 |  SORT AGGREGATE                       |       |      1 |      1 |      1 |00:00:00.03 |    2006 |      4 |
|   2 |   INLIST ITERATOR                     |       |      1 |        |   2000 |00:00:00.03 |    2006 |      4 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      2 |   2000 |   2000 |00:00:00.02 |    2006 |      4 |
|   4 |     BITMAP CONVERSION TO ROWIDS       |       |      2 |        |   2000 |00:00:00.01 |       6 |      4 |
|*  5 |      BITMAP INDEX SINGLE VALUE        | T1_B1 |      2 |        |      2 |00:00:00.01 |       6 |      4 |
------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access(("N1"=1 OR "N1"=5))

The query is selecting 2,000 rows from the table, for n1 = 1 and n1 = 5, and the plan shows us that the optimizer probes the bitmap index twice (operation 5), once for each value, fetching all the rows for n1 = 1, then fetching all the rows for n1 = 5. This entails 2,000 buffer gets. However, we know that for every row where n1 = 1 there is another row nearby (probably in the same block) where n1 = 5 – it would be nice if we could pick up the 1 and the 5 at the same time and do less work.

Technically the optimizer has the necessary facility to do this – it’s known as the BITMAP MERGE – Oracle can read two or more entries from a bitmap index, superimpose the bits (effectively a BITMAP OR), then convert to rowids and visit the table. Unfortunately there are cases (and it seems to be only the simple cases) where this doesn’t appear to be allowed even when we – the users – can see that it might be a very effective strategy. So can we make it happen – and since I’ve asked the question you know that the answer is almost sure to be yes.

Here’s an alternate (messier) SQL statement that achieves the same result:


select
        /*+
                qb_name(main)
                semijoin_driver(@subq)
        */
        max(small_vc)
from
        t1
where
        n1 in (
                select /*+ qb_name(subq) */
                        *
                from    (
                        select 1 from dual
                        union all
                        select 5 from dual
                        )
        )
;

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |      1 |00:00:00.02 |    1074 |       |       |          |
|   1 |  SORT AGGREGATE                      |       |      1 |      1 |      1 |00:00:00.02 |    1074 |       |       |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |   2000 |   2000 |00:00:00.02 |    1074 |       |       |          |
|   3 |    BITMAP CONVERSION TO ROWIDS       |       |      1 |        |   2000 |00:00:00.01 |       6 |       |       |          |
|   4 |     BITMAP MERGE                     |       |      1 |        |      1 |00:00:00.01 |       6 |  1024K|   512K| 8192  (0)|
|   5 |      BITMAP KEY ITERATION            |       |      1 |        |      2 |00:00:00.01 |       6 |       |       |          |
|   6 |       VIEW                           |       |      1 |      2 |      2 |00:00:00.01 |       0 |       |       |          |
|   7 |        UNION-ALL                     |       |      1 |        |      2 |00:00:00.01 |       0 |       |       |          |
|   8 |         FAST DUAL                    |       |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|   9 |         FAST DUAL                    |       |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|* 10 |       BITMAP INDEX RANGE SCAN        | T1_B1 |      2 |        |      2 |00:00:00.01 |       6 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  10 - access("N1"="from$_subquery$_002"."1")

Key points from this plan – and I’ll comment on the SQL in a moment: The number of buffer visits is roughly halved (In many cases we picked up two rows as we visited each buffer); operation 4 shows us that we did a BITMAP MERGE, and we can see in operations 5 to 10 that we did a BITMAP KEY ITERATION (which is a bit like a nested loop join – “for each row returned by child 1 (operation 6) we executed child 2 (operation 10)”) to probe the index twice and get two strings of bits that operation 4 could merge before operation 3 converted to rowids.

For a clearer picture of how we visit the table, here are the first few rows and last few rows from a version of the two queries where we simply select the ID column rather than aggregating on the small_vc column:

select  id from ...

Original query structure
         1
      1001
      2001
      3001
...
    997005
    998005
    999005

2000 rows selected.

Modified query structure:

         1
         5
      1001
      1005
      2001
      2005
...
    998001
    998005
    999001
    999005
    
2000 rows selected.

As you can see, one query returns all the n1 = 1 rows then all the n1 = 5 rows while the other query alternates as it walks through the merged bitmap. You may recall the Exadata indexing problem (now addressed, of course) from a few years back where the order in which rows were visited after a (B-tree) index range scan made a big difference to performance. This is the same type of issue – when the optimizer’s default plan gets the right data in the wrong order we may be able to find ways of modifying the SQL to visit the data in a more efficient order. In this case we save only fractions of a second because all the data is buffered, but it’s possible that in a production environment with much larger tables many, or all, of the re-visits could turn into physical reads.

Coming back to the SQL, the key to the rewrite is to turn my IN-list into a subquery, and then tell the optimizer to use that subquery as a "semijoin driver". This is essentially the mechanism used by the Star Transformation, where the optimizer rewrites a simple join so that each dimension table (typically) appears twice, first as an IN subquery driving the bitmap selection, then as a "joinback". But (according to the manuals) a star transformation requires at least two dimension tables to be involved in a join to the central fact table, and that may be why the semi-join approach is not considered in this (and slightly more complex) case.

My reference: bitmap_merge.sql, star_hack3.sql


Trace Files -- 10c : Query and DML (INSERT)

Hemant K Chitale - Sun, 2016-01-24 04:52
In the previous posts, I have traced either SELECT or INSERT or UPDATE or DELETE statements.

I have pointed out that the block statistics are reported as "FETCH" statistics for SELECTs and "EXECUTE" statistics for the DMLs.
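
As a side note, here is one way to capture and format a trace like the ones below (a sketch, not necessarily the exact commands used for these tests):

-- the trace file identifier is just a placeholder
alter session set tracefile_identifier = 'ins_sel_test';
exec dbms_monitor.session_trace_enable(waits => true, binds => false);

-- run the statement(s) of interest (e.g. the INSERT ... SELECT below), then:
exec dbms_monitor.session_trace_disable;

-- and format the resulting trace file on the database server, e.g.:
-- tkprof <trace_file>.trc ins_sel_test.txt sys=no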

What if we have an INSERT ... AS SELECT?

SQL ID: 5pf0puy1pmcvc Plan Hash: 2393429040

insert into all_objects_many_list select * from all_objects_short_list


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.73      16.19        166       1655      14104       28114
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.74      16.21        166       1655      14104       28114

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL (cr=1766 pr=166 pw=0 time=16204380 us)
     28114      28114      28114   TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=579 pr=0 pw=0 time=127463 us cost=158 size=2614881 card=28117)


insert into all_objects_many_list
select * from all_objects_short_list
where rownum < 11

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          6         11          10
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          6         11          10

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL (cr=6 pr=0 pw=0 time=494 us)
        10         10         10   COUNT STOPKEY (cr=3 pr=0 pw=0 time=167 us)
        10         10         10    TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=3 pr=0 pw=0 time=83 us cost=158 size=2614881 card=28117)


Since it is an INSERT statement, the block statistics are reported against EXECUTE (nothing is reported against FETCH), even though the Row Source Operations section shows a TABLE ACCESS (i.e. a SELECT) against the ALL_OBJECTS_SHORT_LIST table. Also note, as we have seen in the previous trace on INSERTs, that the target table does not get reported in the Row Source Operations.

Here's another example.

SQL ID: 5fgnrpgk3uumc Plan Hash: 2703984749

insert into all_objects_many_list select * from dba_objects


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.25          0          0          0           0
Execute      1      0.63       3.94         82       2622      13386       28125
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.67       4.20         82       2622      13386       28125

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=2736 pr=82 pw=0 time=3954384 us)
28125 28125 28125 VIEW DBA_OBJECTS (cr=360 pr=2 pw=0 time=1898588 us cost=105 size=5820219 card=28117)
28125 28125 28125 UNION-ALL (cr=360 pr=2 pw=0 time=1735711 us)
0 0 0 TABLE ACCESS BY INDEX ROWID SUM$ (cr=2 pr=2 pw=0 time=65154 us cost=1 size=11 card=1)
1 1 1 INDEX UNIQUE SCAN I_SUM$_1 (cr=1 pr=1 pw=0 time=61664 us cost=0 size=0 card=1)(object id 986)
0 0 0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us cost=3 size=25 card=1)
0 0 0 INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)(object id 36)
28124 28124 28124 FILTER (cr=354 pr=0 pw=0 time=1094905 us)
28124 28124 28124 HASH JOIN (cr=354 pr=0 pw=0 time=911322 us cost=102 size=3402036 card=28116)
68 68 68 TABLE ACCESS FULL USER$ (cr=5 pr=0 pw=0 time=328 us cost=3 size=1224 card=68)
28124 28124 28124 HASH JOIN (cr=349 pr=0 pw=0 time=612040 us cost=99 size=2895948 card=28116)
68 68 68 INDEX FULL SCAN I_USER2 (cr=1 pr=0 pw=0 time=159 us cost=1 size=1496 card=68)(object id 47)
28124 28124 28124 TABLE ACCESS FULL OBJ$ (cr=348 pr=0 pw=0 time=147446 us cost=98 size=2277396 card=28116)
0 0 0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us cost=2 size=30 card=1)
0 0 0 INDEX SKIP SCAN I_USER2 (cr=0 pr=0 pw=0 time=0 us cost=1 size=20 card=1)(object id 47)
0 0 0 INDEX RANGE SCAN I_OBJ4 (cr=0 pr=0 pw=0 time=0 us cost=1 size=10 card=1)(object id 39)
1 1 1 NESTED LOOPS (cr=4 pr=0 pw=0 time=262 us cost=3 size=38 card=1)
1 1 1 TABLE ACCESS FULL LINK$ (cr=2 pr=0 pw=0 time=180 us cost=2 size=20 card=1)
1 1 1 TABLE ACCESS CLUSTER USER$ (cr=2 pr=0 pw=0 time=50 us cost=1 size=18 card=1)
1 1 1 INDEX UNIQUE SCAN I_USER# (cr=1 pr=0 pw=0 time=21 us cost=0 size=0 card=1)(object id 11)

Only the source tables are reported in the Row Source Operations section, and all the blocks are reported in the EXECUTE phase. Why does the "query" count in the EXECUTE statistics differ from the "cr" count reported for the LOAD TABLE CONVENTIONAL operation? (LOAD TABLE CONVENTIONAL indicates a regular, non-direct-path INSERT.)
.
.
.

Categories: DBA Blogs

Database Change Notification Listener Implementation

Andrejus Baranovski - Sat, 2016-01-23 08:43
Oracle DB can notify a client when table data changes. The data could be changed by a third-party process or by a different client session. Notification is important when we want to inform the client about changes in the data without a manual re-query. I had a post about ADS (Active Data Service), where notifications were received from the DB through a change notification listener - Practical Example for ADF Active Data Service. Now I would like to focus on the change notification listener itself, because it can be used in ADF not only with ADS, but also with WebSockets. ADF BC uses change notification to reload a VO when it is configured for auto refresh - Auto Refresh for ADF BC Cached LOV.

Change notification is initiated by the DB and handled by a JDBC listener. There are two important steps for such a listener - start and stop. The sample application - WebSocketReusableApp.zip - implements the servlet DBNotificationManagerImpl.


I'm starting and stopping the DB change listener in the servlet's init and destroy methods.


The DB listener initialization code is straightforward. Data changes are handled in the onDatabaseChangeNotification method, and the listener is registered against the DB table referenced by the initial query.


In my example I'm listening for changes in the EMPLOYEES table. Each time a change happens, the query is re-executed to calculate new average salary values per job.
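
In SQL terms, the registered (and re-executed) query is essentially the following (a sketch assuming the standard HR sample schema; the registering database user also needs the CHANGE NOTIFICATION privilege):

-- one-off prerequisite, run as a privileged user (the grantee name is a placeholder):
-- grant change notification to hr;

select job_id, avg(salary) as avg_salary
from   employees
group by job_id;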


On first load, when the servlet is initialized, the listener is registered; the initial data read happens and the listener is reported as started.
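
As a side note (not part of the sample application), the registration can also be checked from the database side:

select * from user_change_notification_regs;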


I can then change a value in the DB and commit it.
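
For example, any committed change to the EMPLOYEES data will trigger a notification; a sketch (the exact statement is not important):

update employees
set    salary = salary + 100
where  job_id = 'SA_REP';

commit;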


The DB change listener is notified and the query is re-executed to fetch the new averages; the value for SA_REP has changed.


I hope this makes it clearer how to use a DB change listener. In future posts I will describe how to push changes to JET through a WebSocket.