- Speed up analytical queries
- Speed up OLTP transactions
- NO application changes
Without the In-Memory Advisor, a DBA has to manually identify the tables to be placed in the In-Memory Column Store (IMCS). This manual task is no longer required: the IMA analyzes the analytical workload of the database and produces a recommendation report (which includes the SQL commands to place the tables in the IMCS).
For more information on the IMA, please refer to MOS note 1965343.1; you may also download the best practices white paper from here.
Hi, welcome to RDX! Nowadays, almost every company uses business intelligence tools. Whether measuring return on investment or identifying your most popular products, BI can be an integral part of your operation.
But how will the technology progress in 2015? For one thing, it’s likely that new iterations of relational databases will receive integrated analytics functions. SQL Server is one particular solution that has become more compatible with Power BI, Microsoft’s signature BI application.
Mobile analytics has garnered much attention, but, in general, most implementations aren’t as flashy as some users would like them to be. However, many companies are engineering their apps to perform data analysis on the backend. This means servers running SQL databases will do the heavy lifting.
Thanks for watching! If you want to know how BI tools can be integrated into your databases, consult a team of DBAs.
The post How will the BI industry progress in 2015? [VIDEO] appeared first on Remote DBA Experts.
You can find an overview of the feature in the Oracle Database Administrator's Guide. Basically, if you have flash devices attached to your server, you can use this flash memory to increase the size of the buffer cache. So instead of aging blocks out of the buffer cache and having to go back to reading them from disk, they move to the much, much faster flash storage as a secondary fast buffer cache (for reads, not writes).
Some scenarios where this is very useful: you have huge tables and huge amounts of data, a very large database with tons of query activity (say, many TB), and your server is limited to a relatively small amount of main RAM (say, 128 or 256G). In this case, if you were to purchase and add a flash storage device of 256G or 512G (for example), you could attach it to the database with the Database Smart Flash Cache feature and increase the buffer cache of your database from 100G or 200G to 300-700G on that same server. In a good number of cases this will give you a significant performance improvement without having to purchase a new server that handles more memory, or enough flash storage to hold your many TB of data instead of rotational storage.
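The quoted range is just additive arithmetic (in-RAM buffer cache plus flash cache); a quick sketch using the example figures above:

```python
# Back-of-the-envelope check of the figures quoted above (all sizes in GB).
# The effective cache is simply the in-RAM buffer cache plus the flash cache.
ram_buffer_cache_options = [100, 200]   # buffer cache sizes from the example
flash_device_options = [256, 512]       # example flash device sizes

combos = sorted(r + f for r in ram_buffer_cache_options
                      for f in flash_device_options)
print(combos[0], combos[-1])  # smallest and largest combined cache sizes
```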
It is also incredibly easy to configure.
1. Install Oracle Linux (I installed Oracle Linux 6 with UEK3)
2. Install Oracle Database 12c (this would also work with 11g - I installed 126.96.36.199.0 EE)
3. Add a flash device to your system (for the example I just added a 1GB device showing up as /dev/sdb)
4. Attach the storage to the database in sqlplus
$ ls /dev/sdb
/dev/sdb
$ sqlplus '/ as sysdba'

SQL*Plus: Release 188.8.131.52.0 Production on Tue Feb 24 05:46:08 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 184.108.40.206.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system set db_flash_cache_file='/dev/sdb' scope=spfile;
System altered.

SQL> alter system set db_flash_cache_size=1G scope=spfile;
System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Total System Global Area 4932501504 bytes
Fixed Size                  2934456 bytes
Variable Size            1023412552 bytes
Database Buffers         3892314112 bytes
Redo Buffers               13840384 bytes
Database mounted.
Database opened.

SQL> show parameters flash

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flash_cache_file                  string      /dev/sdb
db_flash_cache_size                  big integer 1G
db_flashback_retention_target        integer     1440

SQL> select * from v$flashfilestat;

FLASHFILE# NAME            BYTES ENABLED SINGLEBLKRDS SINGLEBLKRDTIM_MICRO CON_ID
---------- -------- ------------ ------- ------------ -------------------- ------
         1 /dev/sdb   1073741824       1            0                    0      0
You can get more information on configuration and guidelines/tuning here. If you want selective control over which tables can or will use the Database Smart Flash Cache, you can use the ALTER TABLE command; see here, specifically the STORAGE clause. By default, tables are aged out into the flash cache, but if you don't want certain tables to be cached you can use the NONE option.
alter table foo storage (flash_cache none);
This feature can really make a big difference in a number of database environments, and I highly recommend taking a look at how Oracle Linux and Oracle Database 12c can help you enhance your setup. It's included with the database running on Oracle Linux.
Here is a link to a white paper that gives a bit of a performance overview.
You will need an Apple Development Provisioning Profile (this costs around $100) in order to install a MAF application on an iPad device for testing. The Provisioning Profile creation process is streamlined in iOS 8 and is simple to follow. Here is an example of our Apple Development Provisioning Profile entry; this can be downloaded and installed on Mac OS with one click:
The sample MAF application I'm going to deploy connects to a REST service. Make sure to set the proper IP address for the REST connection entry in MAF. The IP must point to the Service Bus service with the published REST connection:
JDeveloper 12c fetches Provisioning Profile information automatically. You only need to copy and paste the Common Name from the iOS development certificate into the Signing Identity field (created and registered during the Provisioning Profile creation process):
Make sure to specify the same Application Bundle Id prefix as the one registered in the Provisioning Profile. The documentation states you can test a MAF application on an iPhone/iPad device only in Debug mode; however, this is not true - it works fine in Release mode as well:
That's it for configuration. Choose to deploy the MAF application into an IPA distribution package:
The IPA distribution package file will be generated in the deploy folder. Double-click it and it will be installed into iTunes:
Open the iPhone/iPad section in iTunes and go to the Apps category. You should see your new MAF application listed there. Click the Install button and then press Sync - this will install the application onto the device:
The application loads successfully and the dashboard screen is displayed. The Service Bus provides REST data to the MAF application running on the iPad, and the data is rendered in a Tree Map graph (a MAF component):
The user can switch to the Employees screen:
Alta UI look and feel - we can search for employees and browse through a list with shortcuts:
Switch to cards view, instead of default list view:
Select employee who is a manager:
A pie graph with the compensation of managed employees is displayed:
A list of managed employees is also present:
I tested AirPlay by connecting the iPad with a Mac. This is useful for displaying the iPad screen on a projector when you want to demonstrate your app to an audience. AirPlay synchronisation works pretty well, without configuration headaches (you may need an additional utility application for this). You must enable mirroring on your iPad device:
We get the iPad screen view on the Mac. This is pretty useful for presentations and demos:
Most organizations now have SharePoint in one form or another, but 63% are somewhat stalled in their adoption and progress. The biggest ongoing issues are persuading staff to use it, poor governance, and lack of internal expertise. It can't just be left to IT; it requires business managers and information workers to get involved to maximize the value.
Join your peers from organizations like Target, 3M, Medtronic and US Bank at this free 90 min meetup to learn how to avoid common mistakes and how to ensure success with SharePoint. Connect with local SharePoint experts and customers, and get the latest AIIM (Association for Information and Image Management) research from 400+ SharePoint deployments. Bring your tough questions and ask your colleagues and SharePoint experts for their advice and assistance.
Don't miss this opportunity to meet local SharePoint experts and customers.
- Identify the best way to get user adoption, governance, and business value
- Discuss how to best re-energize a stalled implementation
- Plan the role of SharePoint vs. 3rd party extensions and applications
- Describe best practices for upgrading and migrating to latest version
"If you work with your organization’s information or collaboration resources and technologies, you’ll surely find AIIM a treasure trove of resources." - Andrew McAfee, Professor and author, Enterprise 2.0 and Race Against the Machine
"I find AIIM one of the very best resources for my job." - Larry Sanders, Supervisor at Woodmen of the World Life Insurance Society
“The range of information that AIIM is providing to our industry is nothing short of impressive and the Professional Membership sits at the heart of it.” - Hanns Köhler-Krüner, Research Director at Gartner
Register now to secure your spot - don't miss this free opportunity for education and networking!
Tuesday, March 3, 2015, 3:30-5:30PM CST
125 East 9th Street
Saint Paul, MN 55101
cannot set user id: Resource temporarily unavailable or Fork: Retry: Resource Temporarily Unavailable
cannot set user id: Resource temporarily unavailable
In the past, the same user had reported this error:
Fork: Retry: Resource Temporarily Unavailable
This happens when the user has run out of available processes (the nproc limit), so new processes cannot be forked. In OEL 6.x, this limit is not set in /etc/security/limits.conf but in the file /etc/security/limits.d/90-nproc.conf.
The default content in the file is:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 1024
root soft nproc unlimited
I changed this to:
$ cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 16384
root soft nproc unlimited
$
As soon as this change was made, Amjad was able to login.
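The limit involved can also be checked from inside a process. A minimal sketch (Python on a Linux system, using the standard resource module):

```python
import resource

# RLIMIT_NPROC caps how many processes a user may own. Once the soft limit
# is hit, fork() fails with EAGAIN, which is exactly the
# "Resource temporarily unavailable" error shown above.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("nproc soft limit:", soft)
print("nproc hard limit:", hard)
```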
Access Service Requests, knowledge documents, and bugs.
View and update Service Requests.
Search for Service Requests using Advanced Search or saved searches.
Manage, schedule and approve Change Requests (RFCs) for Managed Cloud Service customers.
Search the Knowledge Base, bugs, and the Oracle System Handbook.
Explore content about Accreditation, Advisor Webcasts, Social Media, Instrumentation, and other proactive services.
Customer User Administrators (CUAs) can manage pending users.
Watch the video below for more information.
Who appreciates the display of feathers by a male peacock?
Female peacocks seem to get a kick out of them. They seem to play a role in mating rituals.
Who else? Why, humans, of course!
We know that humans greatly appreciate those displays, because of the aaahing and ooohing that goes on when we see them. We like those colors. We like the iridescence. We like the shapes and patterns.
If one were to speculate on why a female peacock gets all worked up about a particular male's feather display, we would inevitably hear about instinctual responses, hard-wiring, genetic determinism, and so on.
And if one were to speculate on why a human goes into raptures, we would then experience a major shift in explanation.
Time to talk about anything but a physiological, hard-wired sort of response.
No, for humans, the attraction has to do with our big brains, our ability to create and appreciate "art". And that is most definitely not something other animals do, right?
Oh, sure, right. Like these instinctive, hard-wired bowerbird mating nests:
That clearly has nothing to do with an aesthetic sense or "art". Just instinct.
Why? Because we humans say so. We just assert this "fact."
Most convenient, eh?
Building Better Content and Engagement
Click the image for more details
From a simple leaderboard written on a whiteboard to the sophisticated stats tracking of Oracle Fusion CRM, we are surrounded daily by "gamification" concepts.
In competitive games and sports, comparing stats against opponents and peers is all part of the fun. Organized chess play has long had an intricate rankings system based on match performance. And how many of you are right now slipping in a quick peek at Words With Friends or Clash of Clans? (Tip: don't answer that.)

Gamification in Business
"Gamification" has been something of a corporate buzzword for several years now. At its simplest it is a set of management tools designed to encourage employee and customer behaviors that add business value—but do it in a way that feels natural, intuitive, and fun.
It integrates the dynamics of games—scorekeeping, reward feedback, missions and goals—into an existing process or system to motivate member participation, engagement, and loyalty.

Oracle Community - 15.1 Rewards and Recognition Update
The Oracle Community platform uses a gamification system designed to:
- Broaden scope of knowledge (breadth and depth)
- Encourage participation by rewarding users for completing mission-based goals and objectives
- Recognize users when they add quality content
- Make it easier for other participants to find and evaluate highly rated content
The 15.1 release enhanced the existing system by adding new user "levels," visual perks, badges, and achievements. It gives participants a more flexible, fun way to share knowledge and work within the community.

Benefits
Great support communities derive the most value from the contributions of their users. The enhanced Rewards and Recognition program makes it easier to recognize quality contributions and increases the value of the community for all involved.
If you're an Oracle customer or employee, we highly recommend checking out the new program.

Resources
- Register for an Advisor Webcast to see how the changes improve the community.
- Watch a video to see what's new:
We'd love to hear from you about the new program!
If you're an Oracle customer, give us a heads up in the Community discussion thread.
If you're an Oracle employee, make your voice heard in the MOS Community employee feedback site, with the category: Gamification.
-The Oracle Community Team
By Phil Hill
Last week I covered the announcement from Instructure that they had raised another $40 million in venture funding and were expanding into the corporate learning market. Today I was able to see a demo of their new corporate LMS, Bridge. While Instructure has very deliberately designed a separate product from Canvas, their education-focused LMS, you can see the same philosophy of market strategy and product design embedded in the new system. In a nutshell, Bridge is designed to be a simple, intuitive platform that moves control of the learning design away from central HR or IT control and closer to the end user.
While our primary focus at e-Literate is on higher ed and even some K-12 learning, the professional development and corporate training markets are becoming more important even in the higher ed context. At the least, this is important for those who are tracking Instructure and how the company's plans might affect the future of education platforms.
The core message of Instructure regarding Bridge – just as with Canvas – is that it is focused on ease-of-use whereas the entrenched competition has fallen prey to feature bloat based on the edge cases. Despite this claim and despite Instructure’s track record with Canvas, what does this mean? I’m pretty sure every vendor out there claims ease-of-use, whether their designs are elegant or terrible.
Based on the demo, Bridge appears to define ease-of-use in three distinct areas – streamlined, clutter-free interface for learners, simple tools for content creation by business units, and simple tools for managing learners and content.

Learner User Experience
Bridge has been designed over the past year, based on Instructure’s decision to avoid force-fitting Canvas into corporate learning markets. The core use cases of this new market are far simpler than education use cases, and the resulting product has fewer bells and whistles than Canvas. In Instructure’s view, the current market has such cumbersome products that learning platforms are mostly used just for compliance – take this course or you lose your job – and not at all for actual learning. The Bridge interface (shown alongside the mobile screen and on a laptop) is simple.
While this is a clean interface, I don’t see it as being that big of a differentiator or rationale for a new product line.

Content Creation
The content creation tools, however, start to show Instructure’s distinctive approach. They have made their living on being able to say no – refusing to let user requests for additional features change their core design principles. The approach for Bridge is to assume that content creators have no web design or instructional design experience, providing them with simple formatting and suggestion-based tools to make content creation easy. The formatting looks to be on the level of Google Docs or basic WordPress, rather than Microsoft Word.
When creating new content, the Bridge LMS even puts up prompts for pre-formatted content types.
When creating quizzes, they have an interesting tool that adds natural language processing to facilitate simple questions that can be randomized. The author could write a simple sentence of what they are trying to convey to users, such as “Golden Gate Bridge is in San Francisco”. The tool selects each word and allows the author to add alternative objects that can serve in a quiz, such as suggesting San Mateo or San Diego (it is not clear if you can group words to replace the full “San Francisco” rather than “Francisco”). The randomized quiz questions could then be automatically created.
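The distractor-substitution idea described above is easy to sketch. This is an illustration only, not Instructure's actual implementation; `make_quiz` and its parameters are invented for the example:

```python
import random

def make_quiz(sentence, target, distractors, seed=None):
    """Build a multiple-choice item by blanking out `target` in `sentence`
    and shuffling it among author-supplied distractors.
    (Hypothetical sketch, not Bridge's real quiz tool.)"""
    rng = random.Random(seed)
    options = distractors + [target]
    rng.shuffle(options)  # randomize option order per quiz instance
    return {
        "prompt": sentence.replace(target, "_____"),
        "options": options,
        "answer": target,
    }

q = make_quiz("Golden Gate Bridge is in San Francisco",
              "San Francisco", ["San Mateo", "San Diego"], seed=1)
print(q["prompt"], q["options"])
```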
For content that is more complex, Instructure is taking the approach of saying ‘no’ – go get that content from a SCORM/AICC import coming from a more complex authoring tool.

Learner Administration Tools
Rather than relying on complex HR systems to manage employees, Bridge goes with a CSV import tool that reminds me of Tableau in that it pre-imports, shows the fields, and allows a drag-and-drop selection and re-ordering of fields for the final import.
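The map-then-import pattern is simple to sketch with the standard library. The column mapping and sample rows here are hypothetical, standing in for the drag-and-drop field selection in Bridge's UI:

```python
import csv
import io

# Hypothetical mapping from CSV headers to the fields an LMS would store,
# standing in for the drag-and-drop selection/re-ordering step.
mapping = {"Email": "email", "Full Name": "name", "Dept": "department"}

raw = io.StringIO("Email,Full Name,Dept,Ignored\n"
                  "ann@example.com,Ann Lee,Sales,x\n")

# Keep only mapped columns, renamed to their target field names.
rows = [{mapping[col]: val for col, val in row.items() if col in mapping}
        for row in csv.DictReader(raw)]
print(rows[0])
```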
The system can also create or modify groups based on rules.
To pull this together, Bridge attempts to automate as much of the background process as is feasible. To take one example, when you hire a new employee or change the definition of groups, the system retroactively adds the revised list of learners or groups to assigned courses.
For live training, you can see where Bridge takes the opposite approach to Canvas. In Canvas (as with most education LMSs), it is assumed that more time in the system means more time learning – the core job of learners. In Bridge, however, the assumption is that LMS time-on-task should be minimized. For compliance training in particular, you want the employee to spend as little time training as is reasonable so they can get their real job done. Bridge focuses not on the live training itself but rather on the logistics tasks of setting up the course (scheduling, registering, taking attendance).
Taken together, the big story here is that Instructure seeks to change the situation where learning management in corporations is cloistered within HR, IT, and instructional design units. As they related today, they want to democratize content creation and center learning in the business units where the subject matter experts reside.
Their future plans focus on engagement – getting feedback and dialogue from employees rather than just one-way content dissemination and compliance. If they are successful, this is where they will gain lasting differentiation in the market.
What does this mean from a market perspective? Although I do not have nearly as much experience with corporate training as I do with higher education, this LMS seems like a real system and a real market entry into corporate learning. The primary competitors in this space are not Blackboard, as TechCrunch and Buzzfeed implied, but are Saba, SumTotal, SuccessFactors, Cornerstone, etc. Unlike education, this is a highly fragmented market. I suspect that this means that the growth prospects for Instructure will be slower than in education, but real nonetheless. Lambda Solutions shared the Bersin LMS study to give a view of the market.
This move is clearly timed to help with Instructure’s planned IPO that could happen as soon as November 2015. Investors can now see potential growth in an adjacent market to ed tech where they have already demonstrated growth.
I mentioned in my last post that the biggest risk I see is management focus and attention. I suspect with their strong fund-raising ($90 million to date) that the company has enough cash to hire staff for both product lines, but senior management will oversee both the Canvas and the Bridge product lines and markets.
- Although I would love to see the honest ad: “With a horrible, bloated user interface based on your 300-item RFP checklist!”
- I assume they can integrate with HR systems as well, but we did not discuss this aspect.
- Note this is based on my heuristic analysis and not from Instructure employees.
The post First View of Bridge: The new corporate LMS from Instructure appeared first on e-Literate.
1. First, install the Spring Boot CLI. On a Mac, use brew as follows:
pas@192-168-1-4:~$ brew tap pivotal/tap
Cloning into '/usr/local/Library/Taps/pivotal/homebrew-tap'...
remote: Counting objects: 366, done.
remote: Total 366 (delta 0), reused 0 (delta 0), pack-reused 366
Receiving objects: 100% (366/366), 60.09 KiB | 84.00 KiB/s, done.
Resolving deltas: 100% (195/195), done.
Checking connectivity... done.
Tapped 8 formulae
pas@192-168-1-4:~$ brew install springboot
==> Installing springboot from pivotal/homebrew-tap
==> Downloading https://repo.spring.io/release/org/springframework/boot/spring-boot-cli/1.2.1.RELEASE/spring-boot-cli-1.2.1.RELEASE-bin.tar.gz
Bash completion has been installed to:
zsh completion has been installed to:
Oracle Big Data Discovery was released last week, the latest addition to Oracle’s big data tools suite that includes Oracle Big Data SQL, ODI and its Hadoop capabilities, and Oracle GoldenGate for Big Data 12c. Introduced by Oracle as “the visual face of Hadoop”, Big Data Discovery combines the data discovery and visualisation elements of Oracle Endeca Information Discovery with data loading and transformation features built on Apache Spark to deliver a tool aimed at the “Discovery Lab” part of the Oracle Big Data and Information Management Reference Architecture.
Most readers of this blog will probably be aware of Oracle Endeca Information Discovery, based on the Endeca Latitude product acquired as part of the Endeca acquisition. Oracle positioned Endeca Information Discovery (OEID) in two main ways: on the one hand as a data discovery tool for textual and unstructured data that complemented the more structured analysis capabilities of Oracle Business Intelligence, and on the other hand as a fast click-and-refine data exploration tool similar to Qlikview and Tableau.
The problem for Oracle, though, was that data discovery against files and documents is a bit of a “solution looking for a problem” and doesn’t have a naturally huge market (especially considering the license cost of OEID Studio and the Endeca Server engine that stores and analyzes the data), whereas Qlikview and Tableau are significantly cheaper than OEID (at least at the start) and are more focused on BI-type tasks, making OEID a good tool but not one with a mass market. To address this, whilst OEID will continue as a standalone tool, the data discovery and unstructured data analysis parts of OEID are making their way into this new product called Oracle Big Data Discovery, whilst the fast click-and-refine features will surface as part of Visual Analyzer in OBIEE12c.
More importantly, Big Data Discovery will run on Hadoop, making it a solution for a real problem – how to catalog, explore, refine and visualise the data in the data reservoir, where data has been landed that might be in schema-on-read databases, might need further analysis and understanding, and where users need large-scale tooling to extract the nuggets of information that in time make their way into the “Execution” part of the Big Data and Information Management Reference Architecture. As someone who’s admired the technology behind Endeca Information Discovery but sometimes struggled to find real-life use-cases or customers for it, I’m really pleased to see its core technology applied to a problem space that I’m encountering every day with Rittman Mead’s customers.
In this first post, I’ll look at how Big Data Discovery is architected and how it works with Cloudera CDH5, the Hadoop distribution we use with our customers (Hortonworks HDP support is coming soon). In the next post I’ll look at how data is loaded into Big Data Discovery and then cataloged and transformed using the BDD front-end; then finally, we’ll take a look at exploring and analysing data using the visual capabilities of BDD evolved from the Studio tool within OEID. Oracle Big Data Discovery 1.0 is now GA (Generally Available) but as you’ll see in a moment you do need a fairly powerful setup to run it, at least until such time as Oracle release a compact install version running on VM.
To run Big Data Discovery you’ll need access to a Hadoop install, which in most cases will consist of 6 (minimum 3 or 4, but 6 is the minimum we use) to 18 or so Hadoop nodes running Cloudera CDH5.3. BDD generally runs on its own server nodes and itself can be clustered, but for our setup we ran 1 BDD node alongside 6 CDH5.3 Hadoop nodes, looking like this:
Oracle Big Data Discovery is made up of three component types highlighted in red in the above diagram, two of which typically run on their own dedicated BDD nodes and another which runs on each node in the Hadoop cluster (though there are various install types including all on one node, for demo purposes)
- The Studio web user interface, which combines the faceted search and data discovery parts of Endeca Information Discovery Studio with a lightweight data transformation capability
- The DGraph Gateway, which brings Endeca Server search/analytics capabilities to the world of Hadoop, and
- The Data Processing component that runs on each of the Hadoop nodes, and uses Hive’s HCatalog feature to read Hive table metadata and Apache Spark to load and transform data in the cluster
The Studio component can run across several nodes for high availability and load balancing, while the DGraph element can run on a single node as I’ve set it up, or in a cluster with a single “leader” node and multiple “follower” nodes, again for enhanced availability and throughput. The DGraph part then works alongside Apache Spark to run intensive search and analytics on subsets of the whole Hadoop dataset, with sample sets of data being moved into the DGraph engine and any resulting transformations then being applied to the whole Hadoop dataset using Apache Spark. All of this then runs as part of the wider Oracle Big Data product architecture, which uses Big Data Discovery and Oracle R for the discovery lab and Oracle Exadata, Oracle Big Data Appliance and Oracle Big Data SQL to take discovery lab innovations to the wider enterprise audience.
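The sample-then-apply workflow described above can be sketched in a few lines. This is purely illustrative; BDD does this with its DGraph engine and Spark, not Python:

```python
import random

# Pretend "dataset" is the full data reservoir; we can only afford to
# explore a small sample of it interactively.
dataset = list(range(100_000))
sample = random.Random(0).sample(dataset, 1_000)

# A transformation defined and validated interactively against the sample...
transform = lambda x: x * 2
preview = [transform(x) for x in sample]

# ...is then applied in batch to the whole dataset (Spark's job in BDD).
result = [transform(x) for x in dataset]
print(len(preview), len(result))
```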
So how does Oracle Big Data Discovery work in practice, and what’s a typical workflow? How does it give us the capability to make sense of structured, semi-structured and unstructured data in the Hadoop data reservoir, and how does it look from the perspective of an Oracle Endeca Information Discovery developer, or an OBIEE/ODI developer? Check back for the next parts in this three part series where I’ll first look at the data transformation and exploration capabilities of Big Data Discovery, and then look at how the Studio web interface brings data discovery and data visualisation to Hadoop.
I was going to write a bit about it, but Manfred Moser from Sonatype has already put together a blog post and video on it:
With the new Nexus 2.11.2 release we are supporting the authentication mechanism used for the Oracle Maven repository in both Nexus OSS and Nexus Pro. This allows you to proxy the repository in Nexus and makes the components discoverable via browsing the index as well as searching for components. You will only need to set this up once in Nexus and all your projects. Developers and CI servers get access to the components and the need for any manual work disappears. On the Nexus side, the configuration changes can be done easily as part of your upgrade to the new release.
Check it out @ Using the Oracle Maven Repository with Nexus
Run your application and click the Theme Roller link in the APEX Developer Toolbar:
Theme Roller will open. I won't go into every section, but I want to highlight the most important sections of Theme Roller in this post:
- Style: there are two styles that come with APEX 5.0: Blue and Gray. You can start from one of those and see how your application changes color. It will set predefined colors for the different parts of the templates.
- Color Wheel: when you want to quickly change your colors, an easy way to see different options is by using the Color Wheel. You have two modes: monochrome (2 points) and dual color (3 points - see screenshot). By changing one point, it will map the other point to a complementary color. Next you can move the third point to play more with those colors.
- Global Colors: if the Color Wheel is not specific enough for what you need, you can start by customising the Global Colors. Those are the main colors of the Universal Theme and used to drive the specific components. You can still customise the different components e.g. the header by clicking further down in the list (see next screenshot).
- Containers etc. will allow you to define the specific components. A check icon indicates the standard color coming with the selected style, an "x" means the color was changed, and an "!" means the contrast is probably not great.
This is just awesome... but what if you don't like the changes you made?
Luckily you can reset your entire style, or refresh a specific section by clicking its refresh icon. There are also undo and redo buttons. But that is not all... for power users: when you hold "ALT" while hovering over a color, you can reset just that color! (only that color will get a refresh icon in it, and clicking it will reset it)
Note that all changes you make are stored locally on your computer in your browser's cache (HTML5 local storage), so you don't affect other users by playing with the different colors.
Finally, when you are done editing your color scheme, you can hit the Save As button to save all colors to a new Style. When you close Theme Roller, the style will revert to how it was.
The final step is to apply the new style so everybody sees that version: go to User Interface Details (Shared Components) and set the style to the new one.
Note that this blog post is written based on APEX 5.0 EA3; in the final version of APEX 5.0 (or 5.1) you might be able to apply the new style directly from Theme Roller.
Want to know more about Theme Roller and the Universal Theme? We're doing an APEX 5.0 UI Training on May 12th in the Netherlands.
Database Cloud Service
- Ordering a Trial Subscription to an Oracle Cloud Service
- Administering a Trial Subscription to an Oracle Cloud Service
- Creating a Database Cloud Instance
- Connecting to a Database Instance in the Oracle Database Cloud Service
- Additional videos on the OPC website
Java Cloud Service
Have you ever seen the following message while trying to validate your cluster configuration with your availability groups or FCIs on Windows Server 2012 R2?
Microsoft recommends adding a witness even if you have only two cluster members with dynamic weights. This recommendation may make sense given the new witness capabilities. Indeed, Windows Server 2012 R2 improves quorum resiliency with the new dynamic witness behavior. However, we need to be careful with it, and I will say up front that I'm reluctant to recommend meeting this requirement in a minimal cluster configuration with only two nodes. In my experience, it's very common to implement SQL Server AlwaysOn availability groups or FCI architectures with only two cluster nodes at customer sites. Let's discuss the reason in this blog post.
First of all, let's demonstrate why I don't advise my customers to implement a witness by following the Microsoft recommendation. In my case it consists of adding a file share witness to my existing lab environment with two cluster nodes that use the dynamic weight behavior:
Now let’s introduce a file share witness (\DC2WINCLUST-01) as follows:
We may notice after introducing the FSW that the node weight configuration has changed:
The total number of votes equals 3 here because we are in a situation with an even number of cluster members plus the witness. As a reminder, we are supposed to be using the dynamic witness feature according to the Microsoft documentation here.
In Windows Server 2012 R2, if the cluster is configured to use dynamic quorum (the default), the witness vote is also dynamically adjusted based on the number of voting nodes in current cluster membership. If there is an odd number of votes, the quorum witness does not have a vote. If there is an even number of votes, the quorum witness has a vote.
The quorum witness vote is also dynamically adjusted based on the state of the witness resource. If the witness resource is offline or failed, the cluster sets the witness vote to "0."
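The documented rule above can be condensed into a tiny model. This is my own simplified sketch (the function names are hypothetical, and real cluster arbitration involves more state than this):

```python
def witness_vote(voting_nodes: int, witness_online: bool) -> int:
    """Model the documented Windows Server 2012 R2 dynamic witness rule:
    the witness votes only when the number of voting nodes is even,
    and never votes while its resource is offline or failed."""
    if not witness_online:
        return 0
    return 1 if voting_nodes % 2 == 0 else 0

def total_votes(voting_nodes: int, witness_online: bool) -> int:
    return voting_nodes + witness_vote(voting_nodes, witness_online)

# Two nodes plus an online witness: even node count, so the witness votes.
print(total_votes(2, True))   # 3
# With a failed witness, the documentation says its vote should drop to 0.
print(total_votes(2, False))  # 2
```

Per this model, a failed witness in a two-node cluster should leave 2 votes; as shown next, the observed behavior in the lab diverges from this expectation.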
The last sentence draws my attention, so let's introduce a failure of the FSW. In my case I will just turn off the share used by my WSFC as follows:
As expected, the file share witness state has been changed from online to failed by the resource control manager:
At this point, according to the Microsoft documentation, we might expect the WitnessDynamicWeight property to be changed by the cluster, but to my surprise this was not the case:
In addition, after taking a look at the cluster log, I noticed this sample among the log records:
000014d4.000026a8::2015/02/20-12:45:43.594 ERR [RCM] Arbitrating resource 'File Share Witness' returned error 67
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] Res File Share Witness: OnlineCallIssued -> ProcessingFailure( StateUnknown )
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] TransitionToState(File Share Witness) OnlineCallIssued-->ProcessingFailure.
000014d4.00001ea0::2015/02/20-12:45:43.594 INFO [GEM] Node 1: Sending 1 messages as a batched GEM message
000014d4.000026a8::2015/02/20-12:45:43.594 ERR [RCM] rcm::RcmResource::HandleFailure: (File Share Witness)
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [QUORUM] Node 1: PostRelease for ac9e0522-c273-4da8-99f5-3800637db4f4
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [GEM] Node 1: Sending 1 messages as a batched GEM message
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [QUORUM] Node 1: quorum is not owned by anyone
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] resource File Share Witness: failure count: 0, restartAction: 0 persistentState: 1.
000014d4.00001e20::2015/02/20-12:45:43.594 INFO [GUM] Node 1: executing request locally, gumId:281, my action: qm/set-node-weight, # of updates: 1
000014d4.000026a8::2015/02/20-12:45:43.594 INFO [RCM] numDependents is zero, auto-returning true
000014d4.00001e20::2015/02/20-12:45:43.594 WARN [QUORUM] Node 1: weight adjustment not performed. Cannot go below weight count 3 in a hybrid configuration with 2+ nodes
The last line (highlighted in red) is the most important. I guess that "hybrid configuration" here means my environment includes two cluster nodes and one witness (given its type). An interesting thing to notice is a potential limitation of the dynamic witness behavior: the weight adjustment cannot be performed below two cluster nodes. Unfortunately, I didn't find any documentation from Microsoft about this message. Is it a bug, a missing entry in the documentation, or have I overlooked something about the cluster behavior? At this point I can't say, and I hope to get a response from Microsoft soon. The only thing I can claim is that if I lose a cluster node, the cluster availability will be compromised. This issue is not specific to my lab environment; I have faced the same behavior several times at my customers'.
Let's demonstrate by issuing a shutdown of one of my cluster nodes. After a couple of seconds, the connection with my Windows failover cluster is lost, and here is what I found by looking at the Windows event log:
As I said earlier, with a minimal configuration of two cluster nodes, I always recommend that my customers skip this warning. After all, having only two cluster members with dynamic quorum behavior is sufficient to get good quorum resiliency. Indeed, according to the Microsoft documentation, for the system to re-calculate the quorum correctly, a simultaneous failure of a majority of voting members must not occur (in other words, the failure of cluster members must be sequential), and with two cluster nodes we can only ever lose one node at a time.
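The sequential-versus-simultaneous distinction can be illustrated with a toy model of dynamic quorum without a witness. This is a deliberate simplification of my own (it assumes the vote removed in an even-sized cluster never belongs to a surviving node, which real arbitration does not guarantee):

```python
def cluster_survives(nodes: int, failure_waves: list) -> bool:
    """Toy model of dynamic quorum with no witness.
    Each entry in failure_waves is a count of simultaneous node failures;
    waves arrive one after another, and the vote total is recalculated
    after each wave the cluster survives."""
    for failed in failure_waves:
        # Dynamic quorum drops one vote when the member count is even,
        # so a 50/50 split cannot hang the cluster.
        votes = nodes - 1 if nodes % 2 == 0 and nodes > 1 else nodes
        survivors = nodes - failed
        if survivors < votes // 2 + 1:   # lost a majority of current votes
            return False
        nodes = survivors                # quorum recalculated after the wave
    return True

# Sequential failures on a 2-node cluster: losing one node is survivable.
print(cluster_survives(2, [1]))        # True
# A simultaneous loss of both nodes is not.
print(cluster_survives(2, [2]))        # False
# Last-man-standing on 4 nodes when failures arrive one at a time.
print(cluster_survives(4, [1, 1, 1]))  # True
# But a simultaneous majority failure brings the cluster down.
print(cluster_survives(4, [3]))        # False
```

Under this model, a two-node cluster with dynamic quorum already tolerates every failure pattern it could possibly survive, which is why the added witness buys little in that topology.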
What about more complex environments? Let's say an FCI with 4 nodes (two cluster nodes in each datacenter) and a file share witness in the first datacenter. In contrast, in this case, if the file share witness fails, the cluster will correctly adjust the overall node weight configuration both on the cluster nodes and on the witness. This is completely consistent with the message found above: "Cannot go below weight count 3".
The bottom line is that the dynamic witness feature is very useful, but you have to be careful with its behavior in minimal configurations based on only two cluster nodes, which may produce unexpected results in some cases.
Happy cluster configuration!
Red Samurai ADF Performance Audit Tool v 3.4 - ADF Task Flow Statistics with Oracle DMS Servlet Integration
The DMS Spy Servlet context is accessed at certain intervals, and we not only display DMS data but also store it in our audit tables. This allows us to keep historical data and preserve it between WLS restarts - something that is not possible with the DMS Spy Servlet alone.
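The tool's actual implementation is not public, but the poll-and-persist idea it describes can be sketched generically. Everything below is illustrative: the table name, columns, and sample values are invented, and SQLite stands in for the real audit schema:

```python
import sqlite3

def store_snapshot(conn, snapshot):
    """Persist one polled metrics snapshot so history survives restarts.
    `snapshot` is a list of (taskflow, active, max_active, avg_load_ms)
    tuples; names and columns here are placeholders, not the real schema."""
    conn.execute("""CREATE TABLE IF NOT EXISTS tf_audit (
        taskflow TEXT, active INTEGER, max_active INTEGER,
        avg_load_ms REAL, polled_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
    conn.executemany(
        "INSERT INTO tf_audit (taskflow, active, max_active, avg_load_ms) "
        "VALUES (?, ?, ?, ?)", snapshot)
    conn.commit()

conn = sqlite3.connect(":memory:")
store_snapshot(conn, [("main-task-flow", 3, 7, 120.5)])
rows = conn.execute("SELECT taskflow, max_active FROM tf_audit").fetchall()
print(rows)  # [('main-task-flow', 7)]
```

A scheduled job polling the servlet and calling a routine like this is all that is needed to accumulate the historical data the graphs are built from.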
A new tree map graph displays ADF Task Flow usage in the application - the larger the box, the more frequently the Task Flow is accessed:
The graph is clickable; the user can select a box and detailed data for that Task Flow will be displayed. We display the number of active/maximum active Task Flows over time. The average load time is logged and displayed, which allows you to identify Task Flows with slow performance:
Here you can check information about previous v 3.3 version - Red Samurai ADF Performance Audit Tool v 3.3 - Audit Improvements.
Hi, welcome to RDX! SQL injections have been around for some time. However, they’re not necessarily outdated. Cybersecurity experts have noted that hackers are still using SQL injections to infiltrate databases.
Although the number of SQL injection-based attacks declined steadily over the past several years, 2014 saw a sharp uptick in such instances. DB Networks blamed the deadlines and cost constraints many software development projects operate under. These restrictions sometimes cause engineers to skimp on the back-end security components necessary to maintain application integrity.
The question is, are your databases open to SQL injections? Have a team of DBAs assess your software’s data transaction algorithms. Scrutinize every SQL query your applications initiate, and you’ll be able to identify any problem areas that may leave you open to attack.
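When scrutinizing application queries, the main red flag is SQL assembled by string concatenation instead of bound parameters. A minimal illustration of the difference (SQLite stands in here for any RDBMS, and the table and payload are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: concatenation lets the payload rewrite the WHERE clause.
vulnerable = "SELECT name FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable).fetchall())   # [('alice',)] - row leaked!

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] - no match
```

Any query built like the first form is a candidate problem area; parameterizing it closes that particular door.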
Thanks for watching! Check in next time for more SQL security news!
Next-Gen Digital Experiences
Thursday, February 26, 2015, 11:00am PT / 2:00pm ET. The world has changed to one that's always on and always engaged, requiring organizations to rapidly become "digital businesses." To thrive and survive in this new economy, having the right digital experience and engagement strategy, and the speed of execution to match, is crucial. So how do you get started and accelerate this transformation?
Attend this webcast as we outline best practice strategies to seize the full potential of digital experience & engagement - to deliver the next wave of revenue growth, service excellence and business efficiency. You will gain deep insights into how you can engage your customers, partners and employees for maximum results by empowering both marketing and IT with increased business agility and responsiveness.
REGISTER NOW to reserve your seat for this special webinar event.
MODERATOR
Theresa Cramer, EContent & Intranets
PRESENTERS
Chris Preston, Sr. Director Customer Strategies, Oracle
Kellsey Ruppel, Principal Product Marketing Director, Oracle
Audio is streamed over the Internet, so turn up your computer speakers!