
Feed aggregator

Oracle Database RAC Diagnostics and Tuning

Oracle Real Application Clusters (Oracle RAC) is a clustered version of Oracle Database based on a comprehensive high-availability stack that can be used as the foundation of a database cloud system...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Priority Service Infogram for 07-AUG-2014

Oracle Infogram - Thu, 2014-08-07 15:05

OpenWorld
It’s closing in on us fast! From Proactive Support - Portals: Learn, Connect and Explore at Oracle OpenWorld 2014
Security
Five Computer Security Myths, Debunked by Experts, from LifeHacker.
A new white paper: Mobile Security in a BYOD World.
Oracle issues a virtual strongbox for enterprise encryption keys, from PCWorld.
Neowin is the bearer of good news: CryptoLocker malware has been cracked, victims able to decrypt their files for free.
RDBMS
From Oracle DB/EM Support: Master Note for ORA-1555 Errors.
SQL
From Galo Balda's Blog: New in Oracle 12c: Querying an Associative Array in PL/SQL Programs.
Solaris
From OSTATIC: Oracle Delivers Solaris 11.2 with OpenStack, Integrated SDN Features.
HA-LDOM live migration in Oracle Solaris Cluster 4.2, from the Oracle Solaris Cluster Oasis.
Java
From The Java Source: Tech Article: Generics: How They Work and Why They Are Important.
From DZone: Using Oracle AQ in Java Won’t Get Any Easier Than This.
EPM
Infogram contributor Yin-Jen Su pointed out this great YouTube channel: Oracle EPM Products YouTube channel.
Here’s the announcement from Jan Greenburg:
I’m pleased to announce 3 new videos on our YouTube Channel (https://www.youtube.com/user/OracleEPMWebcasts/videos)!
For Oracle Planning and Budgeting Cloud Service (PBCS):
  • Using Predictive Planning -- demonstrates generating statistical predictions based on historical data.
  • Managing Database Properties -- demonstrates accessing database properties.
For on-premise EPM:
Four-part series on deploying EPM System Products:
  • Part 1 Overview -- demonstrates the standard deployment methodology. It contains links to parts 2, 3 and 4.
  • Part 2 Preparing for Deployment -- demonstrates preparing for standard deployment.
  • Part 3 Installing and Configuring an Initial Instance -- demonstrates installing and configuring an initial instance.
  • Part 4 Scaling Out and Installing EPM System Clients -- demonstrates scaling EPM System components and installing EPM System client software.
Fyi … in addition to accessing videos from our YouTube channel, you can also access our videos from these Oracle Learning Libraries (OLL):
  • EPM Consolidation and Planning Videos Oracle Learning Library (on premise videos): https://apex.oracle.com/pls/apex/f?p=44785:141:25017935607895::NO:141:P141_PAGE_ID%2CP141_SECTION_ID:133%2C959
  • Oracle Planning and Budgeting Cloud Service Library: https://apex.oracle.com/pls/apex/f?p=44785:141:108373392382468::NO:141:P141_PAGE_ID%2CP141_SECTION_ID:91%2C658
OLL provides social networking capabilities that allow you to bookmark, share items through social media, review items, recommend items and create collections that can be private or public.
Oracle Community
Lots of goings on at Oracle Community.
EBS
From the Oracle E-Business Suite Support Blog:
Introducing the Trading Community Architecture APIs Information Center
Value Chain Planning, Advanced Supply Chain Planning, and Inventory Optimization Safety Stock
Use the Item Open Interface to Quickly Add or Update Inventory Items
So How is Everyone Doing Submitting Enhancements in the Procurement Community?
StartUP Demantra, Configuration & Troubleshooting, Steps, Tips & Tricks
Oracle Application Management Pack for Oracle E-Business Suite (AMP) Release 12.1.0.3.0 is Available
Overview of Inventory Transaction Accounting, Part 1 of 3
New Upgrade, Patching & Maintenance Advisors for R12.2.x
Guided Resolution Now Available for Cancel or Discard Issues!
What's New in the My Oracle Support Community (MOSC)
From Oracle E-Business Suite Technology
JRE 1.7.0_67 Certified with Oracle E-Business Suite
New Solaris SPARC OS Requirements for EBS R12
Business
10 Things Speakers Should Never Do, from collaborate.
…and Finally

An Oracle that was not us: The Oak Ridge Automatic Computer and Logical Engine (ORACLE), Oak Ridge National Laboratory, 1953, from Adafruit.

Create with WLST a SOA Suite, Service Bus 12.1.3 Domain

Edwin Biemond - Thu, 2014-08-07 13:14
When you want to create a 12.1.3 SOA Suite or Service Bus domain, you have to use the WebLogic config.sh utility. The 12.1.3 config utility is a big improvement compared to WebLogic 11g; with it I can create some complex cluster configurations without any post-configuration work. But if you want to automate the domain creation and use it in your own (provisioning) tool/script then you

Space used by objects

DBA Scripts and Articles - Thu, 2014-08-07 12:35

Calculate the space used by a single object
This script will help you calculate the size of a single object. [script embedded in the original post]
Calculate the space used by a whole schema
If you want the space used by a whole schema, then here is a variation of the first query. [script embedded in the original post]
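The embedded scripts did not survive the feed, so as a rough sketch only (standard DBA_SEGMENTS dictionary view; owner and object names are placeholders, not the author's originals), queries along these lines produce the same kind of result:

-- Space used by a single object (placeholder owner/object names)
SELECT owner, segment_name, segment_type,
       ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
AND    segment_name = 'EMP'
GROUP  BY owner, segment_name, segment_type;

-- Variation: space used by a whole schema
SELECT owner,
       ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
FROM   dba_segments
WHERE  owner = 'SCOTT'
GROUP  BY owner;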

The post Space used by objects appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

ADF Thematic Map in ADF 12c (12.1.3)

Andrejus Baranovski - Thu, 2014-08-07 12:03
The ADF Thematic Map component from the DVT library was updated in ADF 12c with a marker zoom option and area layer styling (ADF 12c (12.1.3) new features). I decided to check how it works and implemented a quick sample application - ThematicMapApp.zip.

I was using world GDP data (the SQL script is available together with the sample application) and displayed it using ADF Thematic Map. World country borders are hidden on purpose; borders are visible by default:


While zooming, the marker points grow - a very useful feature:


Data for the thematic map is fetched using a SQL-based VO. I'm calculating the total GDP and taking each country's percentage of that total; this allows the marker points to be scaled better:
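As a rough illustration only (this is not the VO query from the sample; the COUNTRIES_GDP table and its columns are made up), an analytic function can compute each country's share of the total in a single pass:

-- Percentage of world GDP per country (hypothetical table/columns)
SELECT country_code,
       gdp_value,
       ROUND(100 * gdp_value / SUM(gdp_value) OVER (), 2) AS gdp_pct
FROM   countries_gdp;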


ADF Thematic Map is configured to support zooming for markers:


The countries area layer is set to be hidden, although data still gets attached to the countries:


The marker is configured with dynamic scaling; this is how each country gets an appropriate marker size, based on its GDP value:


The marker colour property is set to be dynamically calculated in the marker Attribute Groups section:

Whistler, Microsoft and how far cloud has come

Steve Jones - Thu, 2014-08-07 09:00
In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy. Sure, back in early 2008 there were some pieces of Microsoft that knew about cloud, but that really wasn't a corporate view; it was what a very few people inside the company knew. How do I know this? Well back in 2008 I was sitting on the top of a
Categories: Fusion Middleware

Oracle WebCenter Case Study: Improving Invoice to Cash Process

WebCenter Team - Thu, 2014-08-07 08:51


 Kevin is the IT Director for a top-quality Less-than-Truckload carrier servicing eight Midwestern states. 

A recent industry survey showed the company’s website was falling short of customer expectations in the following three areas: 

  • Ease of use 
  • Providing useful information, and 
  • Utilizing effective technology and tracking systems

Kevin looked to Redstone Content Solutions to improve the website’s functionality utilizing award-winning Oracle WebCenter technologies. 

Actian Vector Hadoop Edition

DBMS2 - Thu, 2014-08-07 05:12

I have a small blacklist of companies I won’t talk with because of their particularly unethical past behavior. Actian is one such; they evidently made stuff up about me that Josh Berkus gullibly posted for them, and I don’t want to have conversations that could be dishonestly used against me.

That said, Peter Boncz isn’t exactly an Actian employee. Rather, he’s the professor who supervised Marcin Zukowski’s PhD thesis that became Vectorwise, and I chatted with Peter by Skype while he was at home in Amsterdam. I believe his assurances that no Actian personnel sat in on the call. :)

In other news, Peter is currently working on and optimistic about HyPer. But we literally spent less than a minute talking about that.

Before I get to the substance, there’s been a lot of renaming at Actian. To quote Andrew Brust,

… the ParAccel, Pervasive and Vectorwise technologies are being unified under the Actian Analytics Platform brand. Specifically, the ParAccel technology … is being re-branded Actian Matrix; Pervasive’s technologies are rechristened Actian DataFlow and Actian DataConnect; and Vectorwise becomes Actian Vector.

and

Actian … is now “one company, with one voice and one platform” according to its John Santaferraro

The bolded part of the latter quote is untrue — at least in the ordinary sense of the word “one” — but the rest can presumably be taken as company gospel.

All this is by way of preamble to saying that Peter reached out to me about Actian’s new Vector Hadoop Edition when he blogged about it last June, and we finally talked this week. Highlights include: 

  • Vectorwise, while being proudly multi-core, was previously single-server. The new Vector Hadoop Edition is the first version with node parallelism.
  • Actian’s Vector Hadoop edition uses HDFS (Hadoop Distributed File System) and YARN to manage an Actian-proprietary file format. There is currently no interoperability whereby Hadoop jobs can read these files. However …
  • … Actian’s Vector Hadoop edition relies on Hadoop for cluster management, workload management and so on.
  • Peter thinks there are two paying customers, both too recent to be in production, who between them paid what I’d call a remarkable amount of money.*
  • Roadmap futures* include:
    • Being able to update and indeed trickle-update data. Peter is very proud of Vectorwise’s Positional Delta Tree updating.
    • Some elasticity they’re proud of, both in terms of nodes (generally limited to the replication factor of 3) and cores (not so limited).
    • Better interoperability with Hadoop.

Actian actually bundles Vector Hadoop Edition with DataFlow — the old Pervasive DataRush — into what it calls “Actian Analytics Platform – Hadoop SQL Edition”. DataFlow/DataRush has been working over Hadoop since the latter part of 2012, based on a visit with my then clients at Pervasive that December.

*Peter gave me details about revenue, pipeline, roadmap timetables etc. that I’m redacting in case Actian wouldn’t like them shared. I should say that the timetable for some — not all — of the roadmap items was quite near-term; however, pay no attention to any phrasing in Peter’s blog post that suggests the roadmap features are already shipping.

The Actian Vector Hadoop Edition optimizer and query-planning story goes something like this:

  • Vectorwise started with the open-source Ingres optimizer. After a query is optimized, it is rewritten to reflect Vectorwise’s columnar architecture. Peter notes that these rewrites rarely change operator ordering; they just add column-specific optimizations, whatever that means.
  • Now there are rewrites for parallelism as well.
  • These rewrites all seem to be heuristic/rule-based rather than cost-based.
  • Once Vectorwise became part of the Ingres company (later renamed Actian), Ingres engineers helped them modify the base optimizer so that it wasn’t just the “stock” Ingres one.

As with most modern MPP (Massively Parallel Processing) analytic RDBMS, there doesn’t seem to be any concept of a head-node to which intermediate results need to be shipped. This is good, because head nodes in early MPP analytic RDBMS were dreadful bottlenecks.

Peter and I also talked a bit about SQL-oriented HDFS file formats, such as Parquet and ORC. He doesn’t like their lack of support for columnar compression. Further, in Parquet there seems to be a requirement to read the whole file, to an extent that interferes with Vectorwise’s form of data skipping, which it calls “min-max indexing”.

Frankly, I don’t think the architectural choice “uses Hadoop for workload management and administration” provides a lot of customer benefit in this case. Given that, I don’t know that the world needs another immature MPP analytic RDBMS. I also note with concern that Actian has two different MPP analytic RDBMS products. Still, Vectorwise and indeed all the stuff that comes out of Martin Kersten and Peter’s group in Amsterdam has always been interesting technology. So the Actian Vector Hadoop Edition might be worth taking a look at before you redirect your attention to products with more convincing track records and futures.

Categories: Other

ECEMEA Webcast - Getting Started with your Big Data project

Big data is a new kind of power that transforms everything it touches in business, government, and private life. As a result, bringing big data to your company has the potential to provide big...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Documentum: some useful queries (DQL/IAPI)

Yann Neuhaus - Thu, 2014-08-07 02:37

In this blog post I want to share some useful DQL and IAPI queries that I use all the time. They are mostly dedicated to Documentum support/debugging and are grouped by component. To run them, I recommend Qdman: it is the best tool I know for DQL and IAPI scripts.

1. xPlore

It covers both the search engine and index management.

DQL Dsearch
select * from dm_document search document contains 'manual.doc'

 

This query will perform a search just as if you put the 'manual.doc' in the search field of DA. It can be used to check if the dsearch is working fine and if the indexing has been performed correctly. If not, it will return 0 results for a document that you know does exist in the docbase.

Index Waiting
select count(*) as awaiting_15 FROM dmi_queue_item a, dm_sysobject (all) b WHERE b.r_object_id = a.item_id AND a.name='dm_fulltext_index_user' AND date_sent > date(now)-(15*1/24/60) AND (a.task_state is NULL or a.task_state = ' ')

 

This query will return the number of items that have been waiting to be indexed over the last 15 minutes. The parameter can be changed to 60 minutes or whatever you need; just change the '15' in the previous query.

Index In Progress
select count(*) as in_progress_15 FROM dmi_queue_item a, dm_sysobject (all) b WHERE b.r_object_id = a.item_id AND a.name='dm_fulltext_index_user' AND date_sent > date(now)-(15*1/24/60) AND a.task_state = 'acquired'

 

This one is similar to the previous one, but it returns the number of 'in progress' indexing requests. Note that the parameter can still be changed.

Index By State
select task_state,count(*) from dmi_queue_item where name = 'dm_fulltext_index_user' group by task_state

 

This query lists the number of indexes by state:

  • blank -> awaiting indexing
  • acquired -> in progress
  • warning
  • error
  • done

Delete Indexing Request
delete dmi_queue_item object where item_id='09xxxxxxxxxxxxxx'

 

Sometimes I have noticed indexing requests on deleted documents. This can happen if someone saved a document and then deleted it right after; the indexing request remains in the queue for life. Thus, you may want to delete it. First, check whether the document is deleted by running the IAPI command: dump,c,09xxxxxxxxxxxxxx. If an error occurs saying the document doesn't exist anymore, you can delete the request.

Index Agent Info
select fti.index_name,iac.object_name as instance_name from dm_fulltext_index fti, dm_ftindex_agent_config iac where fti.index_name = iac.index_name and fti.is_standby = false and iac.force_inactive = false

 

This query returns your configured index agent information. It is useful for the IAPI command returning the index agent status (see below).

Index Agent Status
apply,c,NULL,FTINDEX_AGENT_ADMIN,NAME,S,DOCBASE_ftindex_01,AGENT_INSTANCE_NAME,S, HOST._9200_IndexAgent,ACTION,S,status
next,c,q0
get,c,q0,status
close,c,q0

 

This script returns the Index Agent status (Running, Stopped, and so on). Note that you have to replace the Index Agent information in the first line with your own Index Agent's. You can get this information from the DQL query above.

Manually Queue Index Request
queue,c,09xxxxxxxxxxxxxx,dm_fulltext_index_user,dm_force_ftindex

 

This one simply puts an indexing request in the queue. You have to replace 09xxxxxxxxxxxxxxx by the r_object_id of the document you want to queue.

Display Dsearch Port
cat $DOCUMENTUM/dsearch/admin/xplore.properties | grep port

 

For this one you have to go to the xPlore server; it shows the configured dsearch port.

2. Rendering (ADTS)

The following queries concern the rendition component. They cover checking the rendition queue and manually requesting a rendition through IAPI.

 

Manually Queue Rendering Request
queue,c,09xxxxxxxxxxxxxx,dm_autorender_win31,rendition,0,F,,rendition_req_ps_pdf

 

Like the indexing request, this one puts a PDF rendition request in the queue. It can be useful when scripting or in DFC programs.

Rendering By State
select task_state,count(*) from dmi_queue_item where name = 'dm_autorender_win31' group by task_state

 

This returns the rendition requests by state.

Rendering Queue
select * from dmi_queue_item where name ='dm_autorender_win31' order by date_sent desc

 

This query returns all documents present in the rendering queue, that is, all documents waiting for rendition.

Rendition Failed
select r_object_id,item_id,name,item_name,date_sent from dmi_queue_item where event ='DTS' order by date_sent desc

 

This query returns the failed renditions. Pay attention to the date_sent field, because this queue is not cleared. That means if a rendition request failed 3 times in a row and succeeded on the last attempt, there will still be 3 rows in the failed queue even though the rendition did succeed. So you should verify whether the rendition eventually succeeded and, if so, you can delete the rows from the failed queue.

Check Rendition Successful
select r_object_id from dm_document where object_name='DOCUMENT' and exists(select * from dmr_content where any parent_id=dm_document.r_object_id and full_format='pdf')

 

This query checks if a rendition is present for the given DOCUMENT name. If the pdf rendition exists, it returns its r_object_id. If no rendition is present for the given document, it returns nothing.

3. Audit Trail

 

Failed Login Since 1h
select user_name,count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure' and time_stamp > (date(now)-(60*1/24/60)) group by user_name order by 2

 

This query displays the number of failed logons in the docbase per user over the last 60 minutes. The parameter 60 can be changed.

Purge Logon Failure
EXECUTE purge_audit WITH delete_mode='PREDICATE', dql_predicate='dm_audittrail where event_name=''dm_logon_failure'''

 

This statement purges the audit trail queue by deleting all logon failure entries. Be aware that it can take a while depending on the number of entries you have.

Number Of Logon Failure
select count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure'

 

This query simply shows the number of logon failures in the queue.

4. Miscellaneous

 

IAPI Purge Caches
flush,c,ddcache,dm_type
flush,c,ddcache,dmi_type_info
flush,c,ddcache,dm_aggr_domain
flush,c,ddcache,dm_domain
flush,c,ddcache,dm_dd_info
flush,c,ddcache,dm_nls_dd_info
flush,c,ddcache,dm_foreign_key
flush,c,persistentcache
publish_dd,c
reinit,c

 

This script flushes the caches; it can be used when installing ADTS DARs fails due to a version mismatch.

Check ADTS Installer Version
java -cp adtsWinSuiteSetup.jar DiShowVersion
Multi-installer Suite 6.7.2000.42
Installer-Version: 6.7.2000.42 build 1
Installer-Build-Date: 1/11/13 12:28 AM

 

Go to the ADTS installer directory and run this command. It shows the version of the installer.

Encrypt dm_bof_registry Password
java com.documentum.fc.tools.RegistryPasswordUtils PASSWORD

 

This one encrypts the dm_bof_registry password so it can be used in dfc.properties. Note that the encryption process is different on xPlore and ADTS, but you can use it on the content server and in all DFC-related programs. Replace PASSWORD in the command with your clear-text password.

Oracle Database 12.1.0.2 Released (22 July 2014)

Oracle in Action - Thu, 2014-08-07 00:54


Oracle Database 12.1.0.2 has been released on 22 July 2014.

Following are some of the new features and options:

• In-memory column store
• Attribute clustering on disk
• Oracle Flashback archive capabilities for pluggable databases
• Rapid Oracle home provisioning
• Centralized security key vault capabilities
• Storage and query capabilities for nonrelational data
• Advanced Index Compression
• Oracle Big Data SQL
• Oracle JSON Document Store
• Oracle REST Data Services
• Improvements to Oracle Multitenant
• Zone Maps
• Approximate Count Distinct
• Attribute Clustering
• Full Database Caching
…..

You can download it here.
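As a quick, non-authoritative taste of two of the listed features (table and column names are invented, and the In-Memory example assumes the INMEMORY_SIZE initialization parameter has been set):

-- Populate a table into the In-Memory column store
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Approximate Count Distinct: fast, approximate distinct-value counting
SELECT APPROX_COUNT_DISTINCT(customer_id) AS approx_customers
FROM   sales;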

Enjoy!!!



Copyright © ORACLE IN ACTION [Oracle Database 12.1.0.2 Released (22 July 2014)], All Right Reserved. 2014.

The post Oracle Database 12.1.0.2 Released (22 July 2014) appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

SQL Server 2014: DBCC CHECKDB and ReFS support

Yann Neuhaus - Wed, 2014-08-06 21:50

Have you noticed an interesting behavior with SQL Server 2014 when you are running a dbcc checkdb command? If not, you should have a look at this blog post. I discovered it completely by accident during some tests. If you watch carefully, you will see "weird" files appear while the dbcc checkdb command is running.
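For reference, the behavior can be reproduced with an ordinary consistency check along these lines (the database name is just an example, not from the original post):

-- While this runs, transient MDF_MSSQL_DBCC* files appear next to the data files
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;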

Below is a sample of the file you will find near the corresponding database file:

 

blog_15_-_dbcc_checkdb_refs_-_1

 

The file immediately disappears after completion of the dbcc checkdb command. So the question is, what is this file? The type of this file is MDF_MSSQL_DBCC10 and we can easily guess that it concerns a dbcc checkdb command, but I must admit I have never seen it in the past...

To be sure, I decided to trace the file activity of the SQL Server process while the dbcc checkdb command is running. You can see the part of the result that concerns our investigation below:

 

blog_15_-_dbcc_checkdb_refs_-2_

 

So we can now say that these files concern the dbcc checkdb activity and Kevin Farlee (SQL Program Manager at Microsoft) confirmed that this new behavior is quite normal.

Here is the explanation:

Prior to SQL Server 2014, the files for dbcc checkdb were created as alternate data streams of the regular database files. This is why we don’t see them by default (alternate data streams can be shown by using the dir command with the /R parameter). But with the release of Windows Server 2012 and the new ReFS file system, the story has changed: ReFS no longer provides alternate data stream capabilities, which is why it was not compatible with SQL Server versions prior to SQL Server 2014 (see KB2681562).

This is good news because, as you certainly know, ReFS provides some enhancements over NTFS: it is a more resilient file system and it has better support for extremely large amounts of data. As a reminder, with large databases you may face operating system errors 1450 and 665 when you run the dbcc checkdb command, due to an NTFS limitation. These errors are reported in Microsoft KB2002606 and some fixes or workarounds are fortunately available.

Time to switch to the ReFS file system with SQL Server 2014? Your thoughts will be appreciated!

Update on 2U: First full quarterly earnings and insight into model

Michael Feldstein - Wed, 2014-08-06 19:09

2U, the online service provider that went public in the spring, just released its financial report for the first full quarter of operations as a public company. The company beat estimates on total revenue and also lost less money than expected. Overall, it was a strong performance (see WSJ for basic summary or actual quarterly report for more details). The basics:

  • Revenue of $24.7 million for the quarter and $51.1 m for the past six months, which represents year-over-year increase of 32 and 35%;
  • EBITDA Losses of $7.1 m for the quarter and $10.9 m for the past six months, which represents year-over-year increase of -2% and 12%; and
  • Enrollment growth of 31 – 34% year-over-year.

Per the WSJ coverage of the conference call:

“I’m very pleased with our second quarter results, and that we have both the basis and the visibility to increase all of our guidance measures for 2014,” said Chip Paucek, 2U’s Chief Executive Officer and co-founder. “We’ve reached a turning point where, even with continued high investment for growth, our losses have stopped accelerating. At the midpoint of our new guidance range, we now expect our full year 2014 adjusted EBITDA loss to improve by 17% over 2013. Further, we’ve announced a schedule that meets our stated annual goal for new program launches through 2015.”

The company went public in late March at $14 / share and is still at that range ($14.21 before the quarterly earnings release – it might go up tomorrow). As one of only three ed tech companies to have gone public in the US over the past five years, 2U remains worth watching both for its own news and as a bellwether of the IPO market for ed tech.

Notes

The financials provide more insight into the world of Online Service Providers (OSP, aka Online Program Management, School-as-a-Service, Online Enablers, the market with no name). On the conference call 2U’s CEO Chip Paucek reminded analysts that they typically invest (money spent – revenue) $4 – $9 million per program in the early years and do not start to break even until years 3 – 4. 2U might be on the high side of these numbers given their focus on small class sizes at big-name schools, but this helps explain why the OSP market typically focuses on long-term contracts of 10+ years. Without such a long-term revenue-sharing contract, it would be difficult for an OSP to ever break even.

As the market matures – with more competitors and with schools developing their own experience in online programs – it will become more and more difficult for companies to maintain these commitments from schools. We have already seen signs over the past year of changes in institutional expectations.

2U, meanwhile, has positioned itself at the high-end of the market, relying on high tuitions and brand-name elite schools with small classes. The company for the most part will not even compete in a Request for Proposal process, avoiding direct competition with Embanet, Deltak, Academic Partnerships and others. Their prospects seem much stronger than the more competitive mainstream of OSP providers.

See the posts here at e-Literate for more background.

2U has changed one aspect of their strategy, as noted by Donna Murdoch on G+. At least through 2012 the company positioned itself as planning to work with one school per discipline (or vertical in their language). Pick one school for Masters of Social Work, one for MBA, etc. As described in Jan 2012:

“As we come into a new vertical, 2tor basically partners with one great school per vertical. We find one partner, one brand that is world-class. We partner with that brand over a long time period to create the market leader in that space for that discipline.”

2U now specifically plans for secondary schools in different verticals as can be seen in their press release put out today:

Programs Aug 2014

Note the duplication of Social Work between USC and Simmons, Nursing between Georgetown and Simmons, and Data Science between Berkeley and SMU. Note the new approach from page 20 of the quarterly report:

As described above, we have added, and we intend to continue to add, degree programs in a number of new academic disciplines each year, as well as to expand the delivery of existing degree programs to new clients.

View Into Model

Along with the first quarter release (which was not based on a full quarter of operations as a public company), 2U released some interesting videos that give a better view into their pedagogical approach and platform. In this video they describe their “Bi-directional Learning Tool (BLT)”:

This image is from a page on the 2U website showing their approach, with a view of the infamous Brady Bunch layout for live classes (synchronous).

Live Courses

We’ll keep watching 2U and share significant developments as we see them.

The post Update on 2U: First full quarterly earnings and insight into model appeared first on e-Literate.

Why Write-Through is still the default Flash Cache Mode on #Exadata X-4

The Oracle Instructor - Wed, 2014-08-06 12:41

The Flash Cache Mode still defaults to Write-Through on Exadata X-4 because most customers are better suited that way – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so we just save Flash capacity that way. So when you see this

CellCLI> list cell attributes flashcachemode
         WriteThrough

it is likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer will first go to the spinning drives, where it is mirrored according to the redundancy of the diskgroup in which the file being written is placed. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring is already sufficiently done on the spinning drives, as the picture shows:

Flash Cache Mode WRITE-THROUGH

That changes when the Flash Cache Mode is Write-Back: now writes go primarily to the flash cards, and popular objects may never get aged out to the spinning drives, or at least that age-out may happen significantly later, so the writes on flash must now be mirrored. The redundancy of the diskgroup the object in question is placed on again determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache by at least half.

Flash Cache Mode WRITE-BACK

Only databases with write I/O performance issues will benefit from Write-Back; the most likely symptom would be high numbers of the free buffer waits wait event. Flash Logging is done with both Write-Through and Write-Back. So there is a good reason to turn on the Write-Back Flash Cache Mode only on demand. I explained this in very similar terms during my current Oracle University Exadata class in Frankfurt, by the way :-)
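A quick way to check for that symptom is a query against the standard wait interface (a generic sketch, nothing Exadata-specific):

-- Instance-wide free buffer waits since startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'free buffer waits';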


Tagged: exadata
Categories: DBA Blogs

OWB to ODI 12c Migration in action

Antonio Romero - Wed, 2014-08-06 11:00

The OWB to ODI 12c migration utility provides an easy to use on-ramp to Oracle's strategic data integration tool. The utility was designed and built by the same development group that produced OWB and ODI. 

Here's a screenshot from the recording below showing a project in OWB and what it looks like in ODI 12c:


There is a useful webcast that you can play to watch the migration utility in action. It takes an OWB implementation and uses the migration utility to move it into ODI 12c.

http://oracleconferencing.webex.com/oracleconferencing/ldr.php?RCID=df8729e0c7628dde638847d9511f6b46

It's worth having a read of the following OTN article from Stewart Bryson which gives an overview of the capabilities and options OWB customers have moving forward.
http://www.oracle.com/technetwork/articles/datawarehouse/bryson-owb-to-odi-2130001.html

Check it out and see what you think!

Alter Session Kill on Steroids

Pythian Group - Wed, 2014-08-06 10:27

Perhaps you have encountered something like this: A session that is consuming too many resources needs to be killed. You locate the session and use ALTER SYSTEM KILL SESSION ‘SID,SERIAL#’ to kill the session. As you continue to monitor the database you find that the status of the session in v$session is ‘KILLED’, but the session does not go away. You also notice that the SERIAL# is continually changing.

Now you find there is no OS process associated with the session, but the session continues as PMON is unable to finish cleanup for the session. Usually when this happens, the session will be holding a lock. When that happens, the only method to release the lock is to bounce the database. There are some bugs that may be responsible for this problem, such as this one described by Oracle Support:

Pmon Spins While Cleaning Dead Process (Doc ID 1130713.1)

This particular bug affects Oracle 10.2.0.1 – 11.1.0.7. I have personally seen this same behavior happen on many versions of the database from 7.0 on. To avoid these hanging sessions, many DBAs have adopted the habit of first killing the OS process with an OS utility and, if the session is still visible in v$session, then issuing the ALTER SYSTEM KILL command.

The OS command used on Linux/Unix is usually ‘kill -9’. On Windows it is orakill. This method usually avoids the problems encountered when killing a session that is holding a lock and processing DML.

I don’t know just what circumstances trigger this behavior, as I have never been able to reproduce it at will. When it does happen though, it is more than annoying as the only way to clear locks held by the recalcitrant session is to bounce the database.
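When it does happen, a query along these lines (standard v$session and v$lock views; adjust to taste) helps confirm that a KILLED session is still holding locks:

-- Killed sessions that still hold locks
SELECT s.sid, s.serial#, s.status, l.type, l.id1, l.id2, l.lmode
FROM   v$session s
JOIN   v$lock l ON l.sid = s.sid
WHERE  s.status = 'KILLED';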

Quite some time ago (at least as far back as Oracle 8i) Oracle introduced the new IMMEDIATE keyword to use with ALTER SYSTEM KILL SESSION. Using this keyword removes the need to use an OS command to kill a session – Oracle will do it for you! To test this I am using Oracle 10.2.0.4 on Oracle Linux 5.5. I have previously run these same tests in 11.2.0.3 with the same results. Had I access to an 8i or 9i database I would have run the tests there. To start with let’s see what happens when a session is killed without the immediate keyword.

Login to the session to be killed:

$ sqlplus scott/tiger@10gr2

Login as SYSDBA from another terminal and check for scott’s session:

SQL> l
  1  select
  2     s.username,
  3     s.sid,
  4     s.serial#,
  5     p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8*    and p.addr = s.paddr
SQL> /

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 133         35 22870

1 row selected.

At this point Scott's session (133,35) has been killed with ALTER SYSTEM KILL SESSION, without the IMMEDIATE keyword. All that has happened is that Oracle has made an internal call that has disconnected Scott’s session. (Tracing that operation is a different topic.) The process on the server has not been terminated. This can be seen by the following experiment:

Logon again as Scott.

In a SYSDBA session check for Scott’s session:

 SQL> @scott

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 146         81 23678

Now check for the shadow process associated with scott’s session on the server:


[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)

Kill the session and check the status:

SQL> alter system kill session '146,81';

SQL> l
  1  select
  2     s.username,
  3     s.sid,
  4     s.serial#,
  5     p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8*    and p.addr = s.paddr
SQL>/

no rows selected

Check again on the server for the process:

[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)

Interesting, isn’t it? We know the process is still alive on the server, but the session information is no longer associated with the process. This happens because Oracle has disconnected the session, which allows the process to continue until the sqlplus session is terminated. The session information is still available in v$session, but is no longer associated with a server process:

select
  2     s.username,
  3     s.status,
  4     s.sid,
  5     s.serial#
  6  from v$session s
  7* where s.username = 'SCOTT'
SQL>/

USERNAME                       STATUS          SID    SERIAL#
------------------------------ -------- ---------- ----------
SCOTT                          KILLED          146         81

1 row selected.

 1* select pid,spid from v$process where pid = 146
SQL>/

no rows selected

When exiting the Scott session, I can see that the session was killed:

SQL> exit
ERROR:
ORA-00028: your session has been killed

Let’s perform the experiment again, but this time use the IMMEDIATE keyword.

Logon as scott:

> sqlplus scott/tiger@10gr2

SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 5 17:18:53 2014

Logon as SYSDBA and check for the scott session:

SQL> @scott

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 146         83 23939

1 row selected.

Before killing scott’s session:

  • get my OS PID
  • enable 10046 trace

The OS PID will be used for strace on the SYSDBA session shadow process on the server.
The 10046 trace is so we can see what is happening in the strace output.

SQL> l
1 select
2 s.username,
3 s.sid,
4 s.serial#,
5 p.spid spid
6 from v$session s, v$process p
7 where s.username is not null
8 and p.addr = s.paddr
9 and userenv('SESSIONID') = s.audsid
10* order by username, sid
SQL>/

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SYS                                   145         65 23947

1 row selected.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

Now ssh to the db server, check for Scott’s session shadow process, and start strace:

[root@ora10gR2 tmp]# strace -o 23947.strace -p 23947
^Z
[1]+ Stopped strace -o 23947.strace -p 23947
[root@ora10gR2 tmp]# bg
[1]+ strace -o 23947.strace -p 23947 &

[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
23939 ? 00:00:00 oracle

Now kill Scott’s session and exit the SYSDBA session:

SQL> alter system kill session '146,83' immediate;

System altered.

The strace command will now have exited on the server.

First check again for Scott’s session:

[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
[root@ora10gR2 tmp]#

So the Scott shadow process has terminated.

As the 10046 trace was enabled, the writes to the Oracle trace file appear in the strace output, which allows searching for ‘alter system kill’ in the strace file.

From the strace file:

write(5, "alter system kill session '146,8"..., 44) = 44

Now searching for the PID of scott’s session 23939:

read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10) = 0
open("/proc/23939/stat", O_RDONLY) = 10
read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10) = 0
kill(23939, SIGKILL) = 0
kill(23939, SIGCONT) = 0
open("/proc/23939/stat", O_RDONLY) = 10
read(10, "23939 (oracle) Z 1 23939 23939 0"..., 999) = 178
close(10) = 0

From the previous text I can see that Oracle opened the status file for PID 23939.
Why it did so twice I am not sure.

What happens after that is the interesting part.

kill(23939, SIGKILL) = 0

That line means that the SIGKILL signal was successfully sent to Scott’s shadow process.

What does that mean? Run kill -l to get a list of signals:

kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

Notice that signal 9 (kill -9) is SIGKILL.

So when killing a session with ALTER SYSTEM KILL SESSION ‘SID,SERIAL#’ IMMEDIATE Oracle is actually doing the kill -9 for you, and has been for many years now.

Though not shown here, this same test was run when the session was killed without using the IMMEDIATE keyword, and there was no attempt to kill the OS process. This was also evident from the fact that the process was still running on the server until the Scott sqlplus session was exited.

Categories: DBA Blogs

Disaster Recovery Simulation test - performed 30 databases failover

Syed Jaffar - Wed, 2014-08-06 05:55
Successfully switched over (failed over) the roles of more than 30 physical standby databases this morning as part of a Disaster Recovery (DR) simulation test. Fortunately, there were no technical glitches or hassles during the course of the testing, as anticipated. It was indeed a great test, and a very successful one too.

The next big challenge for the team will be rebuilding and re-synchronizing those 30 physical standby databases, whose sizes range from 100 GB to 5 TB.

Anyway, my team is loving the challenges and truly enjoying every moment.





SLOB Data Loading Case Studies – Part I. A Simple Concurrent + Parallel Example.

Kevin Closson - Wed, 2014-08-06 02:21
Introduction

This is Part I in a short series of posts dedicated to loading SLOB data. The SLOB loader is called setup.sh and it is, by default, a concurrent data loader. The SLOB configuration file parameter controlling the number of concurrent data loading threads is called LOAD_PARALLEL_DEGREE. In retrospect I should have named the parameter LOAD_CONCURRENT_DEGREE because unless Oracle Parallel Query is enabled there is no parallelism in the data loading procedure. But if LOAD_PARALLEL_DEGREE is assigned a value greater than 1 there is concurrent data loading.

Occasionally I hear of users having trouble with combining Oracle Parallel Query with the concurrent SLOB loader. It is pretty easy to overburden a system when doing something like concurrent, parallel data loading–in the absence of tools like Database Resource Management I suppose. To that end,  this series will show some examples of what to expect when performing SLOB data loading with various init.ora settings and combinations of parallel and concurrent data loading.
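For what it's worth, a minimal Resource Manager sketch along those lines (plan name, consumer group and degree limit are made-up values, and loader sessions would still need to be mapped to the group):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'LOAD_PLAN', comment => 'Cap PQ during bulk loads');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'LOADERS', comment => 'Data loading sessions');
  -- Limit the parallel degree available to loader sessions
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'LOAD_PLAN', group_or_subplan => 'LOADERS',
    comment => 'Limit PQ degree', parallel_degree_limit_p1 => 4);
  -- A directive for OTHER_GROUPS is mandatory in every plan
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'LOAD_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'Everything else');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/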

In this first example I’ll show an example of loading with LOAD_PARALLEL_DEGREE set to 8. The scale is 524288 SLOB rows which maps to 524,288 data blocks because SLOB forces a single row per block. Please note, the only slob.conf parameters that affect data loading are LOAD_PARALLEL_DEGREE and SCALE. The following is a screen shot of the slob.conf file for this example:

SLOB-data-load-3

The next screen shot shows the very simple init.ora settings I used during the data loading test. This very basic initialization file results in default Oracle Parallel Query, therefore  this example is a concurrent + parallel data load.

SLOB-data-load-6

The next screen shot shows that I directed setup.sh to load 64 SLOB schemas into a tablespace called IOPS. Since SCALE is 524,288 this example loaded roughly 256GB (8192 * 524288 * 64) of data into the IOPS tablespace.

SLOB-data-load-1

As reported by setup.sh the data loading completed in 1,539 seconds or a load rate of roughly 600GB/h. This loading rate by no means shows any intrinsic limit in the loader. In future posts in this series I’ll cover some tuning tips to improve data loading.  The following screen shot shows the storage I/O rates in kilobytes during a portion of the load procedure. Please note, this is a 2s16c32t 115w Sandy Bridge Xeon based server. Any storage capable of I/O bursts of roughly 1.7GB/s (i.e., 2 active 8GFC Fibre Channel paths to any enterprise class array) can demonstrate this sort of SLOB data loading throughput.

SLOB-data-load-2

 

After setup.sh completes it is good to count how many loader threads were able to successfully load the specified number of rows. As the example shows, I simply grep for the value of slob.conf->SCALE in cr_tab_and_load.out. Remember, SLOB in its current form loads a zeroth schema, so the return from such a word count (-l) should be one greater than the number of schemas setup.sh was directed to load.

SLOB-data-load-4

The next screen shot shows the required execution of the procedure.sql script. This procedure must be executed after any execution of setup.sh.

SLOB-data-load-7

Finally, one can use the SLOB/misc/tsf.sql script to report the size of the tablespace used by setup.sh. As the following screenshot shows, the IOPS tablespace ended up with a little over 270GB, which can be accounted for by the size of the tables based on slob.conf, the number of schemas, and a little overhead for indexes.

 

SLOB-data-load-5
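If you prefer a plain dictionary query over tsf.sql, something like the following gives a comparable figure (a sketch, not the SLOB script itself; it reports allocated datafile space rather than segment usage):

SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024/1024, 1) AS size_gb
FROM   dba_data_files
WHERE  tablespace_name = 'IOPS'
GROUP  BY tablespace_name;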

Summary

This installment in the series has shown expected screen output from a simple example of data loading. This example used default Oracle Parallel Query settings, a very simple init.ora and a concurrent loading degree of 8 (slob.conf->LOAD_PARALLEL_DEGREE) to load data at a rate of roughly 600GB/h.

 

 

 


Filed under: oracle

Passed OCP 12c test

Bobby Durrett's DBA Blog - Tue, 2014-08-05 15:27

Well, I finally passed the Oracle Certified Professional (OCP) 12c upgrade exam, 1Z0-060.  I got 86% right on the new features section and 78% correct on the general DBA section.  So, that means I missed roughly 7 questions on each section which was more than I expected because I felt good about all of my answers.

I’m happy with the three resources I used which are:

  1. Self Test software – $99
  2. Matthew Morris book – $10
  3. Sam Alapati book – $44

I spent a lot of time in the 12c manuals. Be careful, though: the 12.1.0.2 manuals just came out, and some things in 12.1.0.2 differ from the test, since the test designers based it on 12.1.0.1.

I built two Linux virtual machines for test preparation.  On one I installed the 12.1.0.1 database software and created a CDB and a non-CDB for testing.  On the other I installed the grid software and database software and created a CDB on ASM disks.  I spent many hours testing 12c features on these two environments and three databases.
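For anyone doing similar prep, a trivial sanity check to tell the two environments apart (standard 12c dictionary, nothing exam-specific):

SELECT name, cdb, con_id FROM v$database;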

I learned a ton both about new 12c features and features that exist in earlier versions, but it was a long and tedious process and I’m glad to finally be done.

- Bobby

Categories: DBA Blogs