Calculate the space used by a single object

This script will help you calculate the size of a single object.

Calculate the space used by a whole schema

If you want the space used by a whole schema, here is a variation of the first query.
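The original scripts were swallowed by a syntax-highlighter plugin; a minimal sketch of both queries, based on the standard dba_segments view (the SCOTT/EMP literals are placeholders, not the original post's values), might look like this:

```sql
-- Size of a single object (table, index, ...); owner and
-- segment_name are placeholders for your own object.
SELECT owner,
       segment_name,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
AND    segment_name = 'EMP'
GROUP  BY owner, segment_name, segment_type;

-- Variation for a whole schema: drop the segment_name filter
-- and aggregate per owner.
SELECT owner,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
GROUP  BY owner;
```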
I was using world GDP data (the SQL script is available together with the sample application) and displayed it using an ADF Thematic Map. World country borders are hidden on purpose; borders are visible by default:
While zooming, marker points grow - a very useful feature:
Data for the thematic map is fetched using a SQL-based VO. I'm calculating total GDP and taking each country's percentage of the total, which allows the marker points to scale better:
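As a hedged sketch of what such a VO query could look like (the gdp_data table and its columns are my assumptions, not the sample application's actual names), the percentage can be computed with an analytic SUM:

```sql
-- Each country's GDP as a percentage of the world total,
-- using an analytic SUM over the whole result set.
SELECT country_code,
       gdp,
       ROUND(100 * gdp / SUM(gdp) OVER (), 2) AS gdp_pct
FROM   gdp_data
ORDER  BY gdp_pct DESC;
```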
ADF Thematic Map is configured to support zooming for markers:
The countries area layer is set to be hidden, although data still gets attached to the countries:
The marker is configured with dynamic scaling; this is how each country gets an appropriate marker size, based on its GDP value:
Marker colour property is set to be dynamically calculated in marker Attribute Groups section:
Kevin is the IT Director for a top-quality Less-than-Truckload carrier servicing eight Midwestern states.
A recent industry survey showed the company’s website was falling short of customer expectations in the following three areas:
- Ease of use
- Providing useful information, and
- Utilizing effective technology and tracking systems
I have a small blacklist of companies I won’t talk with because of their particularly unethical past behavior. Actian is one such; they evidently made stuff up about me that Josh Berkus gullibly posted for them, and I don’t want to have conversations that could be dishonestly used against me.
That said, Peter Boncz isn’t exactly an Actian employee. Rather, he’s the professor who supervised Marcin Zukowski’s PhD thesis that became Vectorwise, and I chatted with Peter by Skype while he was at home in Amsterdam. I believe his assurances that no Actian personnel sat in on the call.
In other news, Peter is currently working on and optimistic about HyPer. But we literally spent less than a minute talking about that.
Before I get to the substance, there’s been a lot of renaming at Actian. To quote Andrew Brust,
… the ParAccel, Pervasive and Vectorwise technologies are being unified under the Actian Analytics Platform brand. Specifically, the ParAccel technology … is being re-branded Actian Matrix; Pervasive’s technologies are rechristened Actian DataFlow and Actian DataConnect; and Vectorwise becomes Actian Vector.
Actian … is now “one company, with one voice and one platform” according to its John Santaferraro
The bolded part of the latter quote is untrue — at least in the ordinary sense of the word “one” — but the rest can presumably be taken as company gospel.
All this is by way of preamble to saying that Peter reached out to me about Actian’s new Vector Hadoop Edition when he blogged about it last June, and we finally talked this week. Highlights include:
- Vectorwise, while being proudly multi-core, was previously single-server. The new Vector Hadoop Edition is the first version with node parallelism.
- Actian’s Vector Hadoop edition uses HDFS (Hadoop Distributed File System) and YARN to manage an Actian-proprietary file format. There is currently no interoperability whereby Hadoop jobs can read these files. However …
- … Actian’s Vector Hadoop edition relies on Hadoop for cluster management, workload management and so on.
- Peter thinks there are two paying customers, both too recent to be in production, who between them paid what I’d call a remarkable amount of money.*
- Roadmap futures* include:
- Being able to update and indeed trickle-update data. Peter is very proud of Vectorwise’s Positional Delta Tree updating.
- Some elasticity they’re proud of, both in terms of nodes (generally limited to the replication factor of 3) and cores (not so limited).
- Better interoperability with Hadoop.
Actian actually bundles Vector Hadoop Edition with DataFlow — the old Pervasive DataRush — into what it calls “Actian Analytics Platform – Hadoop SQL Edition”. DataFlow/DataRush has been working over Hadoop since the latter part of 2012, based on a visit with my then clients at Pervasive that December.
*Peter gave me details about revenue, pipeline, roadmap timetables etc. that I’m redacting in case Actian wouldn’t like them shared. I should say that the timetable for some — not all — of the roadmap items was quite near-term; however, pay no attention to any phrasing in Peter’s blog post that suggests the roadmap features are already shipping.
The Actian Vector Hadoop Edition optimizer and query-planning story goes something like this:
- Vectorwise started with the open-source Ingres optimizer. After a query is optimized, it is rewritten to reflect Vectorwise’s columnar architecture. Peter notes that these rewrites rarely change operator ordering; they just add column-specific optimizations, whatever that means.
- Now there are rewrites for parallelism as well.
- These rewrites all seem to be heuristic/rule-based rather than cost-based.
- Once Vectorwise became part of the Ingres company (later renamed to Actian), Ingres engineers helped them modify the base optimizer so that it wasn’t just the “stock” Ingres one.
As with most modern MPP (Massively Parallel Processing) analytic RDBMS, there doesn’t seem to be any concept of a head-node to which intermediate results need to be shipped. This is good, because head nodes in early MPP analytic RDBMS were dreadful bottlenecks.
Peter and I also talked a bit about SQL-oriented HDFS file formats, such as Parquet and ORC. He doesn’t like their lack of support for columnar compression. Further, in Parquet there seems to be a requirement to read the whole file, to an extent that interferes with Vectorwise’s form of data skipping, which it calls “min-max indexing”.
Frankly, I don’t think the architectural choice “uses Hadoop for workload management and administration” provides a lot of customer benefit in this case. Given that, I don’t know that the world needs another immature MPP analytic RDBMS. I also note with concern that Actian has two different MPP analytic RDBMS products. Still, Vectorwise and indeed all the stuff that comes out of Martin Kersten and Peter’s group in Amsterdam has always been interesting technology. So the Actian Vector Hadoop Edition might be worth taking a look at before you redirect your attention to products with more convincing track records and futures.
We share our skills to maximize your revenue!
In this blog post I want to share some useful DQL and IAPI queries that I use all the time. They are mostly dedicated to Documentum support/debugging and are grouped by component. To use them, I recommend Qdman: it is the best tool I know for running DQL and IAPI scripts.

1. xPlore
It covers both the search engine and the index management.
This query will perform a search just as if you put the 'manual.doc' in the search field of DA. It can be used to check if the dsearch is working fine and if the indexing has been performed correctly. If not, it will return 0 results for a document that you know does exist in the docbase.
This query returns the number of items that have been waiting to be indexed for more than 15 minutes. The parameter can be changed to 60 minutes or whatever; you just have to change the '15' in the query.
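Since the query itself did not survive formatting, here is a hedged sketch of what such a 15-minute check against the fulltext queue might look like (dmi_queue_item and the fulltext queue user are standard, but the exact predicates of the original are my assumption):

```sql
-- Indexing requests queued for more than 15 minutes; change the 15
-- to adjust the threshold. A blank task_state means 'awaiting'.
SELECT COUNT(*)
FROM   dmi_queue_item
WHERE  name = 'dm_fulltext_index_user'
AND    (task_state IS NULL OR task_state = '')
AND    date_sent < DATE(NOW) - (15 * 1/24/60);
```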
This one is similar to the previous one, but it returns the number of 'in progress' indexing requests. Note that the parameter can still be changed.
This query lists the number of indexes by state:
- blank -> awaiting indexing
- acquired -> in progress
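A hedged sketch of such a per-state count (queue name and state values as listed above; the original query is unknown):

```sql
-- Indexing queue items grouped by state:
-- blank = awaiting indexing, 'acquired' = in progress.
SELECT task_state, COUNT(*) AS nb_items
FROM   dmi_queue_item
WHERE  name = 'dm_fulltext_index_user'
GROUP  BY task_state;
```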
Sometimes I have noticed indexing requests for deleted documents. This can happen if someone saves a document and then deletes it right after; the indexing request remains in the queue forever. Thus, you may want to delete it. First, check that the file really is deleted by running the IAPI command: dump,c,09xxxxxxxxxxxxxx. If an error occurs saying the document no longer exists, you can delete the request.
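Assuming the standard dmi_queue_item type, the orphaned request can then be removed with a DQL delete along these lines (the item_id value is a placeholder for the dead document's r_object_id):

```sql
-- Remove the stale indexing request for a document that no
-- longer exists; 09xxxxxxxxxxxxxx is a placeholder r_object_id.
DELETE dmi_queue_item OBJECTS
WHERE  item_id = '09xxxxxxxxxxxxxx';
```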
This query returns your configured index agent information. It is useful for the IAPI command returning the index agent status (see below).
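A hedged sketch of such a query, joining the standard fulltext index and index agent configuration types (the join condition is my assumption):

```sql
-- Configured index agent(s) and the fulltext index they serve.
SELECT fti.index_name,
       iac.object_name
FROM   dm_fulltext_index fti,
       dm_ftindex_agent_config iac
WHERE  fti.index_name = iac.index_name;
```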
This script returns the Index Agent status (Running, Stopped, and so on). Note that you have to replace the Index Agent information in the first line with your own Index Agent. You can get this information from the DQL query above.
This one simply puts an indexing request in the queue. You have to replace 09xxxxxxxxxxxxxxx by the r_object_id of the document you want to queue.
For this one you have to go to the xPlore server; it shows the configured dsearch port.
The following queries concern the rendition component. They cover the rendition queue check and the way to manually request a rendition through IAPI.
Manually Queue Rendering Request

queue,c,09xxxxxxxxxxxxxx,dm_autorender_win31,rendition,0,F,,rendition_req_ps_pdf
As with the indexing request, this one puts a PDF rendition request in the queue. It can be useful when scripting or in DFC programs.
This returns the rendition requests by state.
This query returns all documents present in the rendering queue, that is, all documents waiting for rendition.
This query returns the failed renditions. Be aware of the date_sent field, because this queue is not cleared. This means that if a rendition request failed 3 times in a row and succeeded the last time, there will still be 3 rows in the failed queue even though the rendition succeeded. So you should verify whether the rendition did succeed and, if so, you can delete the row from the failed queue.
This query checks if a rendition is present for the given DOCUMENT name. If the pdf rendition exists, it returns its r_object_id. If no rendition is present for the given document, it returns nothing.
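A hedged sketch of such a rendition check (dmr_content and full_format are standard, but the exact original query is unknown; the object_name literal is a placeholder):

```sql
-- r_object_id of the PDF rendition for a given document name;
-- returns nothing if no PDF rendition exists.
SELECT r_object_id
FROM   dmr_content
WHERE  ANY parent_id IN
       (SELECT r_object_id
        FROM   dm_document
        WHERE  object_name = 'manual.doc')
AND    full_format = 'pdf';
```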
Failed Login Since 1h

select user_name,count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure' and time_stamp > (date(now)-(60*1/24/60)) group by user_name order by 2
This query displays the number of failed logons per user in the docbase over the last 60 minutes. The parameter 60 can be changed.
This statement purges the audit trail queue by deleting all logon failure entries. Be aware that it can take a while depending on the number of entries you have.
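A hedged sketch of the purge (this assumes your account has the privileges required to delete audit trail entries):

```sql
-- Purge all logon-failure entries from the audit trail;
-- can take a while depending on the number of entries.
DELETE dm_audittrail OBJECTS
WHERE  event_name = 'dm_logon_failure';
```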
This query simply shows the number of logon failures in the queue.
IAPI Purge Caches

flush,c,ddcache,dm_type
This command flushes caches; it can be used when installing ADTS DARs fails due to a version mismatch.
Multi-installer Suite 6.7.2000.42
Installer-Version: 6.7.2000.42 build 1
Installer-Build-Date: 1/11/13 12:28 AM
Go to the ADTS installer directory and issue this command. It shows the version of the installer.
This one encrypts the dm_bof_registry password in order to use it in dfc.properties. Note that the encryption process is different on xPlore and ADTS, but you can use it on the content server and all DFC-related programs. Replace the PASSWORD in the command with your clear-text password.
Oracle Database 12.1.0.2 was released on 22 July 2014.
Following are some of the new features and options:
• In-memory column store
• Attribute clustering on disk
• Oracle Flashback archive capabilities for pluggable databases
• Rapid Oracle home provisioning
• Centralized security key vault capabilities
• Storage and query capabilities for nonrelational data
• Advanced Index Compression
• Oracle Big Data SQL
• Oracle JSON Document Store
• Oracle REST Data Services
• Improvements to Oracle Multitenant
• Zone Maps
• Approximate Count Distinct
• Attribute Clustering
• Full Database Caching
You can download it here.
Have you noticed an interesting behavior with SQL Server 2014 when you are running a dbcc checkdb command? If not, you should have a look at this blog post. I discovered it completely by accident during some tests. If you watch carefully, you will see "weird" files appear while the dbcc checkdb command is running.
Below is a sample of the file you will find near the corresponding database file:
The file disappears immediately after completion of the dbcc checkdb command. So the question is: what is this file? Its type is MDF_MSSQL_DBCC10, and we can easily guess that it concerns a dbcc checkdb command, but I must admit I have never seen it in the past...
To be sure, I decided to trace the file activity of the SQL Server process while the dbcc checkdb command is running. You can see the part of the result that concerns our investigation below:
So we can now say that these files concern the dbcc checkdb activity and Kevin Farlee (SQL Program Manager at Microsoft) confirmed that this new behavior is quite normal.
Here is the explanation:
Prior to SQL Server 2014, the files for dbcc checkdb were created as alternate data streams of the regular database files. This is why we don’t see them by default (alternate data streams can be shown by using the dir command with the /R parameter). But with the release of Windows Server 2012 and the new ReFS file system, the story has changed, because ReFS no longer provides alternate data stream capabilities, which is why it was not compatible with SQL Server prior to SQL Server 2014 (see KB2681562).
This is good news, because as you certainly know, ReFS provides some enhancements over NTFS. It is a more resilient file system and it has better support for extremely large amounts of data. As a reminder, with large databases you may face operating system error numbers 1450 and 665 when you run the dbcc checkdb command, due to an NTFS limitation. These errors are reported in Microsoft KB2002606 and some fixes or workarounds are fortunately available.
Time to switch to the ReFS file system with SQL Server 2014? Your thoughts will be appreciated!
2U, the online service provider that went public in the spring, just released its financial report for the first full quarter of operations as a public company. The company beat estimates on total revenue and also lost less money than expected. Overall, it was a strong performance (see WSJ for basic summary or actual quarterly report for more details). The basics:
- Revenue of $24.7 million for the quarter and $51.1 m for the past six months, which represent year-over-year increases of 32% and 35%;
- EBITDA losses of $7.1 m for the quarter and $10.9 m for the past six months, which represent year-over-year changes of -2% and 12%; and
- Enrollment growth of 31 – 34% year-over-year.
Per the WSJ coverage of the conference call:
“I’m very pleased with our second quarter results, and that we have both the basis and the visibility to increase all of our guidance measures for 2014,” said Chip Paucek, 2U’s Chief Executive Officer and co-founder. “We’ve reached a turning point where, even with continued high investment for growth, our losses have stopped accelerating. At the midpoint of our new guidance range, we now expect our full year 2014 adjusted EBITDA loss to improve by 17% over 2013. Further, we’ve announced a schedule that meets our stated annual goal for new program launches through 2015.”
The company went public in late March at $14 / share and is still at that range ($14.21 before the quarterly earnings release – it might go up tomorrow). As one of only three ed tech companies to have gone public in the US over the past five years, 2U remains worth watching both for its own news and as a bellwether of the IPO market for ed tech.

Notes
The financials provide more insight into the world of Online Service Providers (OSP, aka Online Program Management, School-as-a-Service, Online Enablers, the market with no name). On the conference call 2U’s CEO Chip Paucek reminded analysts that they typically invest (money spent minus revenue) $4 – $9 million per program in the early years and do not start to break even until years 3 – 4. 2U might be on the high side of these numbers given their focus on small class sizes at big-name schools, but this helps explain why the OSP market typically focuses on long-term contracts of 10+ years. Without such a long-term revenue-sharing contract, it would be difficult for an OSP to ever break even.
As the market matures, with more competitors and with schools developing their own experience in online programs, it will become more and more difficult for companies to maintain these commitments from schools. We have already seen signs over the past year of changes in institutional expectations.
2U, meanwhile, has positioned itself at the high-end of the market, relying on high tuitions and brand-name elite schools with small classes. The company for the most part will not even compete in a Request for Proposal process, avoiding direct competition with Embanet, Deltak, Academic Partnerships and others. Their prospects seem much stronger than the more competitive mainstream of OSP providers.
See the posts here at e-Literate for more background.
2U has changed one aspect of their strategy, as noted by Donna Murdoch on G+. At least through 2012 the company positioned itself as planning to work with one school per discipline (or vertical in their language). Pick one school for Masters of Social Work, one for MBA, etc. As described in Jan 2012:
“As we come into a new vertical, 2tor basically partners with one great school per vertical. We find one partner, one brand that is world-class. We partner with that brand over a long time period to create the market leader in that space for that discipline.”
2U now specifically plans for secondary schools in different verticals as can be seen in their press release put out today:
Note the duplication of Social Work between USC and Simmons, Nursing between Georgetown and Simmons, and Data Science between Berkeley and SMU. Note the new approach from page 20 of the quarterly report:
As described above, we have added, and we intend to continue to add, degree programs in a number of new academic disciplines each year, as well as to expand the delivery of existing degree programs to new clients.

View Into Model
Along with the first quarter release (which was not based on a full quarter of operations as a public company), 2U released some interesting videos that give a better view into their pedagogical approach and platform. In this video they describe their “Bi-directional Learning Tool (BLT)”:
This image is from a page on the 2U website showing their approach, with a view of the infamous Brady Bunch layout for live classes (synchronous).
We’ll keep watching 2U and share significant developments as we see them.
The post Update on 2U: First full quarterly earnings and insight into model appeared first on e-Literate.
The Flash Cache Mode still defaults to Write-Through on Exadata X-4 because most customers are better served that way – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so we just save Flash capacity that way. So when you see this
CellCLI> list cell attributes flashcachemode WriteThrough
it is likely for the best :-)
Let me explain: Write-Through means that writing I/O coming from the database layer first goes to the spinning drives, where it is mirrored according to the redundancy of the diskgroup containing the file being written to. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring has already been done sufficiently on the spinning drives, as the picture shows:
That changes when the Flash Cache Mode is Write-Back: now writes go primarily to the flash cards, and popular objects may never get aged out to the spinning drives at all. At the very least, that age-out may happen significantly later, so the writes on flash must now be mirrored. The redundancy of the diskgroup where the object in question is placed again determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache by at least half.
Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom is a high number of Free Buffer Waits wait events. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason for turning on the Write-Back Flash Cache Mode only on demand. I explained just this, in very similar terms, during my present Oracle University Exadata class in Frankfurt, by the way :-)
The OWB to ODI 12c migration utility provides an easy-to-use on-ramp to Oracle's strategic data integration tool. The utility was designed and built by the same development group that produced OWB and ODI.

Here's a screenshot from the recording below showing a project in OWB and what it looks like in ODI 12c:
There is a useful webcast that you can play and watch the migration utility in action. It takes an OWB implementation and uses the migration utility to move into ODI 12c.
It's worth having a read of the following OTN article from Stewart Bryson which gives an overview of the capabilities and options OWB customers have moving forward.
Check it out and see what you think!
Perhaps you have encountered something like this: A session that is consuming too many resources needs to be killed. You locate the session and use ALTER SYSTEM KILL SESSION ‘SID,SERIAL#’ to kill the session. As you continue to monitor the database you find that the status of the session in v$session is ‘KILLED’, but the session does not go away. You also notice that the SERIAL# is continually changing.
Now you find there is no OS process associated with the session, but the session continues as PMON is unable to finish cleanup for the session. Usually when this happens, the session will be holding a lock. When that happens, the only method to release the lock is to bounce the database. There are some bugs that may be responsible for this problem, such as this one described by Oracle Support:
Pmon Spins While Cleaning Dead Process (Doc ID 1130713.1)
This particular bug affects Oracle 10.2.0.1 and later releases. I have personally seen this same behavior on many versions of the database, from 7.0 on. To avoid these hanging sessions many DBA’s have adopted the habit of first killing the OS process with an OS utility, and if the session is still visible in v$session, issuing the ALTER SYSTEM KILL command.
The OS command used on Linux/Unix is usually ‘kill -9’. On Windows it is orakill. This method usually avoids the problems encountered when killing a session that is holding a lock and processing DML.
I don’t know just what circumstances trigger this behavior, as I have never been able to reproduce it at will. When it does happen though, it is more than annoying as the only way to clear locks held by the recalcitrant session is to bounce the database.
Quite some time ago (at least as far back as Oracle 8i) Oracle introduced the IMMEDIATE keyword for ALTER SYSTEM KILL SESSION. Using this keyword removes the need to use an OS command to kill a session – Oracle will do it for you! To test this I am using Oracle 10.2.0.4 on Oracle Linux 5.5. I have previously run these same tests on 11g with the same results. Had I access to an 8i or 9i database I would have run the tests there. To start with, let’s see what happens when a session is killed without the IMMEDIATE keyword.
Login to the session to be killed:
$ sqlplus scott/tiger@10gr2
Login as SYSDBA from another terminal and check for scott’s session:
SQL> l
  1  select
  2  s.username,
  3  s.sid,
  4  s.serial#,
  5  p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8* and p.addr = s.paddr
SQL> /

USERNAME                          SID    SERIAL# SPID
------------------------------ ------ ---------- ------------
SCOTT                             133         35 22870

1 row selected.
All that has happened at this point is that Oracle has made an internal call that has disconnected Scott’s session. (tracing that operation is a different topic.) The process on the server has not been terminated. This can be seen by the following experiment:
Logon again as Scott.
In a SYSDBA session check for Scott’s session:
SQL> @scott

USERNAME                          SID    SERIAL# SPID
------------------------------ ------ ---------- ------------
SCOTT                             146         81 23678
Now check for the shadow process associated with scott’s session on the server:
[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)
Kill the session and check the status:
SQL> alter system kill session '146,81';
SQL> l
  1  select
  2  s.username,
  3  s.sid,
  4  s.serial#,
  5  p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8* and p.addr = s.paddr
SQL> /

no rows selected
Check again on the server for the process:
[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)
Interesting, isn’t it? We know the process is still alive on the server, but the session information is no longer associated with the process. This happens because Oracle has disconnected the session, which allows the process to continue until the sqlplus session is terminated. The session information is still available in v$session, but is no longer associated with a server process:
SQL> l
  1  select
  2  s.username,
  3  s.status,
  4  s.sid,
  5  s.serial#
  6  from v$session s
  7* where s.username = 'SCOTT'
SQL> /

USERNAME                       STATUS      SID    SERIAL#
------------------------------ -------- ------ ----------
SCOTT                          KILLED      146         81

1 row selected.

SQL> l
  1* select pid,spid from v$process where pid = 146
SQL> /

no rows selected
When exiting the Scott session, I can see that the session was killed:
SQL> exit ERROR: ORA-00028: your session has been killed
Let’s perform the experiment again, but this time use the IMMEDIATE keyword.
Logon as scott:
> sqlplus scott/tiger@10gr2
SQL*Plus: Release 10.2.0.4.0 Production on Tue Aug 5 17:18:53 2014
Logon as SYSDBA and check for the scott session;
SQL> @scott

USERNAME                          SID    SERIAL# SPID
------------------------------ ------ ---------- ------------
SCOTT                             146         83 23939

1 row selected.
Before killing scott’s session:
- get my OS PID
- enable 10046 trace
The OS PID will be used for strace on the SYSDBA session shadow process on the server.
The 10046 trace is so we can see what is happening in the strace output.
SQL> l
  1  select
  2  s.username,
  3  s.sid,
  4  s.serial#,
  5  p.spid spid
  6  from v$session s, v$process p
  7  where s.username is not null
  8  and p.addr = s.paddr
  9  and userenv('SESSIONID') = s.audsid
 10* order by username, sid
SQL> /

USERNAME                          SID    SERIAL# SPID
------------------------------ ------ ---------- ------------
SYS                               145         65 23947

1 row selected.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.
Now ssh to the db server, check for the Scott session shadow process, and start strace:
[root@ora10gR2 tmp]# strace -o 23947.strace -p 23947
+ Stopped strace -o 23947.strace -p 23947
[root@ora10gR2 tmp]# bg
+ strace -o 23947.strace -p 23947 &
[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
23939 ? 00:00:00 oracle
Now kill Scott’s session and exit the SYSDBA session:
SQL> alter system kill session '146,83' immediate;

System altered.
The strace command will now have exited on the server.
First check again for Scott’s session:
[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
So the Scott shadow process has terminated.
As the 10046 trace was enabled, the output to the oracle trace file will appear in the strace file, which allows searching for ‘alter system kill’ in the strace file.
From the strace file:
write(5, "alter system kill session '146,8"..., 44) = 44
Now searching for the PID of scott’s session 23939:
read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10)                               = 0
open("/proc/23939/stat", O_RDONLY)      = 10
read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10)                               = 0
kill(23939, SIGKILL)                    = 0
kill(23939, SIGCONT)                    = 0
open("/proc/23939/stat", O_RDONLY)      = 10
read(10, "23939 (oracle) Z 1 23939 23939 0"..., 999) = 178
close(10)                               = 0
From the previous text I can see that Oracle opened the status file for PID 23939.
Why it did so twice I am not sure.
What happens after that is the interesting part.
kill(23939, SIGKILL) = 0
That line means that the SIGKILL signal was successfully sent to Scott’s shadow process.
What does that mean? Run kill -l to get a list of signals:
$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     16) SIGSTKFLT
17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU
25) SIGXFSZ     26) SIGVTALRM   27) SIGPROF     28) SIGWINCH
29) SIGIO       30) SIGPWR      31) SIGSYS      34) SIGRTMIN
35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3  38) SIGRTMIN+4
39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7  58) SIGRTMAX-6
59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
Notice that signal 9 (kill -9) is SIGKILL.
So when killing a session with ALTER SYSTEM KILL SESSION ‘SID,SERIAL#’ IMMEDIATE, Oracle is actually doing the kill -9 for you, and has been for many years now.
Though not shown here, this same test was run when the session was killed without using the IMMEDIATE keyword, and there were no attempts to kill the OS process. This was inferred as well from the fact that the process was still running on the server up until the time the Scott sqlplus session was exited.
The next big challenge for the team will be reconstructing and re-syncing those 30 physical standby databases, which range from 100GB to 5TB in size.
Anyway, my team loves the challenges and is truly enjoying every moment.
This is Part I in a short series of posts dedicated to loading SLOB data. The SLOB loader is called setup.sh and it is, by default, a concurrent data loader. The SLOB configuration file parameter controlling the number of concurrent data loading threads is called LOAD_PARALLEL_DEGREE. In retrospect I should have named the parameter LOAD_CONCURRENT_DEGREE, because unless Oracle Parallel Query is enabled there is no parallelism in the data loading procedure. But if LOAD_PARALLEL_DEGREE is assigned a value greater than 1 there is concurrent data loading.
Occasionally I hear of users having trouble with combining Oracle Parallel Query with the concurrent SLOB loader. It is pretty easy to overburden a system when doing something like concurrent, parallel data loading–in the absence of tools like Database Resource Management I suppose. To that end, this series will show some examples of what to expect when performing SLOB data loading with various init.ora settings and combinations of parallel and concurrent data loading.
In this first example I’ll show an example of loading with LOAD_PARALLEL_DEGREE set to 8. The scale is 524288 SLOB rows which maps to 524,288 data blocks because SLOB forces a single row per block. Please note, the only slob.conf parameters that affect data loading are LOAD_PARALLEL_DEGREE and SCALE. The following is a screen shot of the slob.conf file for this example:
The next screen shot shows the very simple init.ora settings I used during the data loading test. This very basic initialization file results in default Oracle Parallel Query, therefore this example is a concurrent + parallel data load.
The next screen shot shows that I directed setup.sh to load 64 SLOB schemas into a tablespace called IOPS. Since SCALE is 524,288 this example loaded roughly 256GB (8192 * 524288 * 64) of data into the IOPS tablespace.
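The ~256GB figure falls straight out of the arithmetic: one 8KB block per row, 524,288 blocks per schema, 64 schemas. A quick sanity check in SQL:

```sql
-- 8KB block size * 524,288 blocks/schema * 64 schemas, in GB.
SELECT 8192 * 524288 * 64 / POWER(1024, 3) AS approx_gb
FROM   dual;
-- approx_gb = 256
```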
As reported by setup.sh, the data loading completed in 1,539 seconds, a load rate of roughly 600GB/h. This loading rate by no means shows any intrinsic limit in the loader. In future posts in this series I'll cover some tuning tips to improve data loading. The following screen shot shows the storage I/O rates in kilobytes during a portion of the load procedure. Please note, this is a 2-socket, 16-core, 32-thread (2s16c32t) 115W Sandy Bridge Xeon based server. Any storage capable of I/O bursts of roughly 1.7GB/s (e.g., 2 active 8GFC Fibre Channel paths to an enterprise class array) can demonstrate this sort of SLOB data loading throughput.
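The 256GB and 600GB/h figures are easy to sanity-check. A quick sketch of the arithmetic (assuming the 8KB default block size):

```python
# Back-of-envelope check of the load volume and rate quoted above.
block_size = 8192        # bytes per Oracle block (assumed 8KB default)
scale = 524288           # SLOB rows per schema == blocks per schema
schemas = 64             # schemas loaded by setup.sh
elapsed_seconds = 1539   # load time reported by setup.sh

gb_loaded = block_size * scale * schemas / 1024**3
gb_per_hour = gb_loaded / elapsed_seconds * 3600

print(gb_loaded)            # 256.0
print(round(gb_per_hour))   # 599, i.e. roughly 600GB/h
```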
After setup.sh completes, it is good to count how many loader threads successfully loaded the specified number of rows. As the example shows, I simply grep for the value of slob.conf->SCALE in cr_tab_and_load.out. Remember, SLOB in its current form loads a zeroth schema, so the line count (wc -l) should be one greater than the number of schemas setup.sh was directed to load.
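That sanity check amounts to a one-line rule; a hypothetical helper (not part of SLOB) makes it explicit:

```python
def load_is_complete(requested_schemas: int, grep_matches: int) -> bool:
    """True when every loader thread reported slob.conf->SCALE rows.

    SLOB loads a zeroth schema in addition to those requested, so the
    grep through cr_tab_and_load.out should match one extra line.
    """
    return grep_matches == requested_schemas + 1

print(load_is_complete(64, 65))  # True  -- the example in this post
print(load_is_complete(64, 64))  # False -- one loader thread fell short
```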
The next screen shot shows the required execution of the procedure.sql script. This script must be run after any execution of setup.sh.
Finally, one can use the SLOB/misc/tsf.sql script to report the size of the tablespace used by setup.sh. As the following screenshot shows, the IOPS tablespace ended up with a little over 270GB, which is accounted for by the size of the tables (per slob.conf and the number of schemas) plus a little overhead for indexes.
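The reported tablespace size is consistent with the loaded data: 65 schemas (64 requested plus the zeroth) at 4GiB of table blocks each is 260GB, leaving roughly 10GB for index and other segment overhead. A quick check:

```python
# Table-data portion of the IOPS tablespace in this example.
block_size = 8192
scale = 524288
schemas = 64 + 1   # 64 requested plus the zeroth schema

table_gb = block_size * scale * schemas / 1024**3
print(table_gb)  # 260.0 -- the rest of the ~270GB is index/segment overhead
```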
This installment in the series has shown expected screen output from a simple example of data loading. This example used default Oracle Parallel Query settings, a very simple init.ora and a concurrent loading degree of 8 (slob.conf->LOAD_PARALLEL_DEGREE) to load data at a rate of roughly 600GB/h.
Well, I finally passed the Oracle Certified Professional (OCP) 12c upgrade exam, 1Z0-060. I got 86% right on the new features section and 78% correct on the general DBA section. So, that means I missed roughly 7 questions on each section, which was more than I expected because I felt good about all of my answers.
I’m happy with the three resources I used which are:
I spent a lot of time in the 12c manuals. Be careful, though, because the manuals for the latest 12c release just came out and some things in them differ from the test, because the test designers based the test on the earlier 12c release.
I built two Linux virtual machines for test preparation. On one I installed the 12c database software and created a CDB and a non-CDB for testing. On the other I installed the grid software and database software and created a CDB on ASM disks. I spent many hours testing 12c features on these two environments and three databases.
I learned a ton both about new 12c features and features that exist in earlier versions, but it was a long and tedious process and I’m glad to finally be done.
The sample application, ADFTransientVOPassivationApp.zip, implements a SQL-based View Object with transient attributes. Instead of creating a completely programmatic View Object, I created a SQL-based one with a single Id attribute backed by a SQL expression, and then added transient attributes. The Id attribute is never actually used; it exists for one reason only, to make the View Object SQL-based. One of the transient attributes is selected as the key attribute:
The SQL query doesn't fetch any rows; it exists only to make this a SQL-based View Object. ADF BC knows how to passivate/activate SQL-based View Objects automatically, so I'm going to use this feature to force passivation/activation of all the transient attributes defined on the same View Object:
Just make sure to select the Passivate Including All Transient Values option; this ensures all transient attributes are passivated/activated automatically, without any coding intervention:
I'm going to test the sample application with AM pooling off; this simulates passivation/activation behaviour on each request:
On the ADF UI side, I'm going to implement a table. To be able to enter rows successfully with such a SQL-based View Object with transient attributes, I must set ChangeEventPolicy = ppr for the iterator in the Page Definition:
Input components must be set with the AutoSubmit = true option:
At runtime, when I enter rows, all data gets passivated, as you can see from the log. Three rows are entered, and data for all attributes is passivated/activated automatically:
After joining Pythian, I was introduced to several configuration management systems, Puppet being one of them. Foreman is a systems management tool that can be integrated with Puppet to manage Puppet modules and to initiate Puppet runs on hosts from a web interface. This is very useful if you want to configure a large number of systems.
Puppet kick, which was previously used to initiate Puppet runs from Foreman, is now deprecated.
To initiate Puppet runs from the Foreman interface, I used MCollective. MCollective can be used to execute parallel jobs on remote systems. There are three main components:
- Client – connects to the MCollective server and sends commands.
- Server – runs on all managed systems and executes commands.
- Middleware – a message broker like ActiveMQ.
I used the mcollective Puppet module from Puppet Labs for my setup:

    # puppet module install puppetlabs-mcollective
My setup has the middleware (ActiveMQ) and the MCollective client on the Puppet server, and MCollective servers on all managed systems.
After the implementation, I found that Puppet runs initiated from the Foreman web interface were failing for some servers.
I found the following in /var/log/foreman-proxy/proxy.log:

    D, [2014-04-18T07:20:54.392392 #4256] DEBUG -- : about to execute: /usr/bin/sudo /usr/bin/mco puppet runonce -I server.pythian.com
    W, [2014-04-18T07:20:56.167627 #4256] WARN -- : Non-null exit code when executing '/usr/bin/sudo /usr/bin/mco puppet runonce -I reg-app-02.prod.tprweb.net'
    E, [2014-04-18T07:20:56.175034 #4256] ERROR -- : Failed puppet run: Check Log files
You can see that the mco command is trying to execute a Puppet run on server.pythian.com and failing. The mco command uses several subcommands called 'applications' to interact with managed systems, and 'puppet' is one of them.
When running the command on the command line, I received the following:

    # mco puppet runonce -I server.pythian.com
    | [ > ] 0 / 1
    warn 2014/04/11 08:05:34: client.rb:218:in `start_receiver' Could not receive all responses. Expected : 1. Received : 0
    Finished processing 0 / 1 hosts in 22012.79 ms
    No response from:
I am able to ping the server.
When I ran 'mco ping', I found that the problem server was identified by a short hostname while the others showed FQDNs:

    $ mco ping
    server time=89.95 ms
    server3.pythian.com time=95.26 ms
    server2.pythian.com time=96.16 ms
So mcollective was exporting a short hostname while Foreman expected an FQDN (Fully Qualified Domain Name) from this server.
Foreman takes the node name from the Puppet certificate name and uses it for filtering when sending mco commands.
MCollective exports the identity differently. From http://docs.puppetlabs.com/mcollective/configure/server.html#facts-identity-and-classes:

    identity
    The node's name or identity. This should be unique for each node, but does not need to be.
    Default: The value of Ruby's Socket.gethostname method, which is usually the server's FQDN.
    Sample value: web01.example.com
    Allowed values: Any string containing only alphanumeric characters, hyphens, and dots, i.e. matching the regular expression /\A[\w\.\-]+\Z/
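The allowed-values rule quoted above is just a regular expression, so candidate identities are easy to test against it. A small sketch (note that \w also admits underscores, which the prose description omits):

```python
import re

# The identity pattern from the MCollective server configuration docs.
IDENTITY_RE = re.compile(r'\A[\w\.\-]+\Z')

def valid_identity(name: str) -> bool:
    return IDENTITY_RE.match(name) is not None

print(valid_identity("web01.example.com"))   # True
print(valid_identity("server"))              # True  -- short names are legal too
print(valid_identity("web01 example"))       # False -- spaces are not allowed
```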
I passed the FQDN as the identity on the servers via the mcollective module, which resulted in the following setting:

    # cat /etc/mcollective/server.cfg | grep identity
    identity = server.pythian.com
This allowed the command to run successfully and got 'Puppet Run' from Foreman working:

    # mco puppet runonce -I server.pythian.com
    * [ ============================================================> ] 1 / 1
Now 'mco ping' looks good as well:

    $ mco ping
    server.pythian.com time=91.34 ms
    server3.pythian.com time=91.23 ms
    server2.pythian.com time=82.16 ms
Now let us check why this was happening.
The mcollective identity is exported by the Ruby method Socket.gethostname.
From the Ruby source code you can see that Socket.gethostname gets its value from the C library's gethostname() (./ext/socket/socket.c):

    #ifdef HAVE_GETHOSTNAME
    /*
     * Socket.gethostname => hostname
     *
     * Returns the hostname.
     *
     *   p Socket.gethostname #=> "hal"
     *
     * Note that it is not guaranteed to be able to convert to IP address using gethostbyname, getaddrinfo, etc.
     * If you need local IP address, use Socket.ip_address_list.
     */
    #if defined(NI_MAXHOST)
    #  define RUBY_MAX_HOST_NAME_LEN NI_MAXHOST
    #elif defined(HOST_NAME_MAX)
    #  define RUBY_MAX_HOST_NAME_LEN HOST_NAME_MAX
    #else
    #  define RUBY_MAX_HOST_NAME_LEN 1024
    #endif
    ...
        if (gethostname(buf, (int)sizeof buf - 1) < 0)
            ...
        buf[sizeof buf - 1] = '\0';
gethostname is a glibc function which calls the uname system call and copies the value from the returned nodename field.
So while Foreman uses the FQDN value it collects from the Puppet certificate name, mcollective exports the hostname returned by gethostname().
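Python exposes the same two lookups, which makes the mismatch easy to reproduce outside Ruby: socket.gethostname() wraps the gethostname(2) call that mcollective's identity comes from, while socket.getfqdn() asks the resolver, which is how Puppet derives the certificate name. A minimal sketch:

```python
import socket

short_name = socket.gethostname()  # what mcollective would export as identity
fqdn = socket.getfqdn()            # what the resolver (and thus puppet) sees

# On a misconfigured host these two disagree, e.g. 'server' vs
# 'server.pythian.com', and Foreman's mco filter never matches.
print(short_name)
print(fqdn)
```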
Now let us see how gethostname() gives different values on different systems.
When the complete FQDN is set as the HOSTNAME parameter in /etc/sysconfig/network, we can see that Socket.gethostname returns the FQDN:

    [root@centos ~]# cat /etc/sysconfig/network
    HOSTNAME=centos.pythian.com
    [root@centos ~]# hostname -v
    [root@centos ~]# irb
    1.9.3-p484 :001 > require 'socket'
    1.9.3-p484 :002 > Socket.gethostname
    1.9.3-p484 :003 >
The system that was having the problem had the following configuration:

    [root@centos ~]# cat /etc/sysconfig/network
    HOSTNAME=server
    [root@centos ~]# hostname -v
    [root@centos ~]# irb
    1.9.3-p484 :001 > require 'socket'
    1.9.3-p484 :002 > Socket.gethostname
    1.9.3-p484 :003 >
Here Ruby returns only the short hostname from Socket.gethostname. But the host had the following entry in /etc/hosts:

    192.168.122.249 server.pythian.com server
This allowed the system to resolve the FQDN:

    [root@centos ~]# hostname -f -v
    Resolving `server' ...
From 'man hostname':

    The FQDN of the system is the name that the resolver(3) returns for the host name.

    Technically: The FQDN is the name gethostbyname(2) returns for the host name returned by gethostname(2). The DNS domain name is the part after the first dot.
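The man-page definition of the DNS domain name translates directly into code; a small illustrative helper for "the part after the first dot":

```python
def dns_domain(fqdn: str) -> str:
    # Per hostname(1): the DNS domain name is the part after the first dot.
    _, _, domain = fqdn.partition('.')
    return domain

print(dns_domain("server.pythian.com"))  # pythian.com
print(dns_domain("server"))              # '' -- a short name has no domain part
```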
Since the resolver can resolve the hostname from /etc/hosts, Puppet picks up the FQDN value for the certificate, which Foreman later uses.
But mcollective exports the short hostname returned by gethostname().
To fix the issue on Red Hat based Linux distributions, we can try any of the following:
* Set an FQDN in /etc/sysconfig/network, like below:

    # cat /etc/sysconfig/network
    HOSTNAME=server.pythian.com
* Use a short hostname as HOSTNAME, but make sure it does not resolve to an FQDN via /etc/hosts or DNS (not really suggested).
* Set a short hostname or an FQDN as HOSTNAME, but make sure there is an entry like the one below in /etc/hosts and that mcollective exports the FQDN as its identity:

    192.168.122.249 server.pythian.com server